2023-07-12 08:18:10,184 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6 2023-07-12 08:18:10,205 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-12 08:18:10,231 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 08:18:10,232 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19, deleteOnExit=true 2023-07-12 08:18:10,232 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 08:18:10,232 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/test.cache.data in system properties and HBase conf 2023-07-12 08:18:10,233 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 08:18:10,233 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir in system properties and HBase conf 2023-07-12 08:18:10,234 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 08:18:10,234 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 08:18:10,234 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 08:18:10,389 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-12 08:18:10,887 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 08:18:10,892 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 08:18:10,893 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 08:18:10,893 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 08:18:10,893 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 08:18:10,894 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 08:18:10,894 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 08:18:10,895 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 08:18:10,895 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 08:18:10,895 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 08:18:10,896 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/nfs.dump.dir in system properties and HBase conf 2023-07-12 08:18:10,896 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/java.io.tmpdir in system properties and HBase conf 2023-07-12 08:18:10,896 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 08:18:10,896 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 08:18:10,897 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 08:18:11,453 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 08:18:11,458 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 08:18:11,793 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-12 08:18:11,990 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-12 08:18:12,012 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:12,049 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:12,082 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/java.io.tmpdir/Jetty_localhost_37431_hdfs____.n4tk88/webapp 2023-07-12 08:18:12,226 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37431 2023-07-12 08:18:12,265 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 08:18:12,266 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 08:18:12,718 WARN [Listener at localhost/42813] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:12,796 WARN [Listener at localhost/42813] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 08:18:12,818 WARN [Listener at localhost/42813] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:12,825 INFO [Listener at localhost/42813] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:12,831 INFO [Listener at localhost/42813] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/java.io.tmpdir/Jetty_localhost_38651_datanode____q5b22c/webapp 2023-07-12 08:18:12,939 INFO [Listener at localhost/42813] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38651 2023-07-12 08:18:13,387 WARN [Listener at localhost/44453] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:13,439 WARN [Listener at localhost/44453] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 08:18:13,443 WARN [Listener at localhost/44453] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:13,445 INFO [Listener at localhost/44453] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:13,455 INFO [Listener at localhost/44453] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/java.io.tmpdir/Jetty_localhost_40241_datanode____opgxwa/webapp 2023-07-12 08:18:13,565 INFO [Listener at localhost/44453] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40241 2023-07-12 08:18:13,598 WARN [Listener at localhost/41129] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:13,631 WARN [Listener at localhost/41129] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 08:18:13,639 WARN [Listener at localhost/41129] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:13,641 INFO [Listener at localhost/41129] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:13,650 INFO [Listener at localhost/41129] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/java.io.tmpdir/Jetty_localhost_38283_datanode____xoxzbj/webapp 2023-07-12 08:18:13,791 INFO [Listener at localhost/41129] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38283 2023-07-12 08:18:13,804 WARN [Listener at localhost/44853] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:14,131 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc5b11bf44c541bfa: Processing first storage report for DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b from datanode 686f060b-5b06-4943-8d33-0849b15379b1 2023-07-12 08:18:14,133 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc5b11bf44c541bfa: from storage DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b node DatanodeRegistration(127.0.0.1:32775, datanodeUuid=686f060b-5b06-4943-8d33-0849b15379b1, infoPort=36169, 
infoSecurePort=0, ipcPort=44453, storageInfo=lv=-57;cid=testClusterID;nsid=1258570357;c=1689149891542), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-12 08:18:14,133 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc29eb363b7f1dfa2: Processing first storage report for DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3 from datanode eb0c195d-22c8-4e63-8cc0-6218a9f8c698 2023-07-12 08:18:14,133 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc29eb363b7f1dfa2: from storage DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3 node DatanodeRegistration(127.0.0.1:46329, datanodeUuid=eb0c195d-22c8-4e63-8cc0-6218a9f8c698, infoPort=40785, infoSecurePort=0, ipcPort=41129, storageInfo=lv=-57;cid=testClusterID;nsid=1258570357;c=1689149891542), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:14,133 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xee9e7a68d08cb6b0: Processing first storage report for DS-210f6c6b-127c-4179-bc3c-20e846cc6403 from datanode 7a7b6c1a-e024-4635-8442-040a2924521b 2023-07-12 08:18:14,133 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xee9e7a68d08cb6b0: from storage DS-210f6c6b-127c-4179-bc3c-20e846cc6403 node DatanodeRegistration(127.0.0.1:41167, datanodeUuid=7a7b6c1a-e024-4635-8442-040a2924521b, infoPort=35021, infoSecurePort=0, ipcPort=44853, storageInfo=lv=-57;cid=testClusterID;nsid=1258570357;c=1689149891542), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:14,134 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc5b11bf44c541bfa: Processing first storage report for DS-5c43a216-9f61-43b2-ab78-29b3d995ca02 from datanode 686f060b-5b06-4943-8d33-0849b15379b1 2023-07-12 08:18:14,134 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc5b11bf44c541bfa: from storage DS-5c43a216-9f61-43b2-ab78-29b3d995ca02 node DatanodeRegistration(127.0.0.1:32775, datanodeUuid=686f060b-5b06-4943-8d33-0849b15379b1, infoPort=36169, infoSecurePort=0, ipcPort=44453, storageInfo=lv=-57;cid=testClusterID;nsid=1258570357;c=1689149891542), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:14,134 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc29eb363b7f1dfa2: Processing first storage report for DS-367caf59-41ab-4959-a9cd-9b769c34b9fc from datanode eb0c195d-22c8-4e63-8cc0-6218a9f8c698 2023-07-12 08:18:14,134 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc29eb363b7f1dfa2: from storage DS-367caf59-41ab-4959-a9cd-9b769c34b9fc node DatanodeRegistration(127.0.0.1:46329, datanodeUuid=eb0c195d-22c8-4e63-8cc0-6218a9f8c698, infoPort=40785, infoSecurePort=0, ipcPort=41129, storageInfo=lv=-57;cid=testClusterID;nsid=1258570357;c=1689149891542), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:14,134 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xee9e7a68d08cb6b0: Processing first storage report for DS-53123b62-e46e-43c6-9aae-7519507601cf from datanode 7a7b6c1a-e024-4635-8442-040a2924521b 2023-07-12 08:18:14,135 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xee9e7a68d08cb6b0: from storage 
DS-53123b62-e46e-43c6-9aae-7519507601cf node DatanodeRegistration(127.0.0.1:41167, datanodeUuid=7a7b6c1a-e024-4635-8442-040a2924521b, infoPort=35021, infoSecurePort=0, ipcPort=44853, storageInfo=lv=-57;cid=testClusterID;nsid=1258570357;c=1689149891542), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 08:18:14,415 DEBUG [Listener at localhost/44853] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6 2023-07-12 08:18:14,510 INFO [Listener at localhost/44853] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/zookeeper_0, clientPort=51057, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 08:18:14,528 INFO [Listener at localhost/44853] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51057 2023-07-12 08:18:14,536 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:14,539 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:15,222 INFO [Listener at localhost/44853] util.FSUtils(471): Created version file at hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9 with version=8 2023-07-12 08:18:15,222 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/hbase-staging 2023-07-12 08:18:15,231 DEBUG [Listener at localhost/44853] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 08:18:15,231 DEBUG [Listener at localhost/44853] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 08:18:15,231 DEBUG [Listener at localhost/44853] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 08:18:15,231 DEBUG [Listener at localhost/44853] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
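The records up to this point show HBaseTestingUtility bringing up HDFS, a single-node MiniZooKeeperCluster on client port 51057, and a LocalHBaseCluster with all ports randomized. For orientation, below is a minimal sketch of how a test such as TestRSGroupsAdmin1 typically requests this topology through the HBase 2.x test API; the option values mirror the StartMiniClusterOption printed at the top of this section, and the builder methods are the stock test-utility API, not something shown verbatim in this log.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Mirrors StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1} above.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // starts DFS, ZooKeeper and the HBase cluster, as logged above
        try {
          // test body would go here
        } finally {
          util.shutdownMiniCluster();    // tears the cluster down and removes the test-data directory
        }
      }
    }
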
2023-07-12 08:18:15,628 INFO [Listener at localhost/44853] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-12 08:18:16,220 INFO [Listener at localhost/44853] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:16,265 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:16,265 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:16,266 INFO [Listener at localhost/44853] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:16,266 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:16,266 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:16,423 INFO [Listener at localhost/44853] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:16,526 DEBUG [Listener at localhost/44853] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-12 08:18:16,632 INFO [Listener at localhost/44853] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44301 2023-07-12 08:18:16,648 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:16,650 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:16,676 INFO [Listener at localhost/44853] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44301 connecting to ZooKeeper ensemble=127.0.0.1:51057 2023-07-12 08:18:16,732 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:443010x0, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:16,735 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44301-0x101589c725b0000 connected 2023-07-12 08:18:16,764 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:16,765 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:16,770 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:16,787 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44301 2023-07-12 08:18:16,787 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44301 2023-07-12 08:18:16,788 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44301 2023-07-12 08:18:16,790 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44301 2023-07-12 08:18:16,791 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44301 2023-07-12 08:18:16,829 INFO [Listener at localhost/44853] log.Log(170): Logging initialized @7385ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-12 08:18:16,975 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:16,976 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:16,977 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:16,979 INFO [Listener at localhost/44853] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 08:18:16,980 INFO [Listener at localhost/44853] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:16,980 INFO [Listener at localhost/44853] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:16,984 INFO [Listener at localhost/44853] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
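The RpcExecutor lines above (handlerCount per queue, one read and one write priority queue) and the randomized info-server ports are controlled by standard HBase configuration keys. The sketch below shows the kind of Configuration a test harness might apply to get this shape; the numeric values are only illustrative, and the property names are the ordinary HBase 2.x keys rather than values taken from this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;

    public class RpcConfigSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Few handlers per executor, in the spirit of the "handlerCount=3" lines above (illustrative value).
        conf.setInt(HConstants.REGION_SERVER_HANDLER_COUNT, 3);
        // Split the priority queue into separate read and write queues (see RWQueueRpcExecutor above).
        conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);
        // Port 0 requests a random port, which is why the master and RS info servers bind to arbitrary ports.
        conf.setInt(HConstants.MASTER_INFO_PORT, 0);
        conf.setInt(HConstants.REGIONSERVER_INFO_PORT, 0);
        return conf;
      }
    }
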
2023-07-12 08:18:17,072 INFO [Listener at localhost/44853] http.HttpServer(1146): Jetty bound to port 39471 2023-07-12 08:18:17,074 INFO [Listener at localhost/44853] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:17,116 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,121 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7a39ade6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:17,122 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,122 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5320c268{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:17,205 INFO [Listener at localhost/44853] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:17,218 INFO [Listener at localhost/44853] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:17,218 INFO [Listener at localhost/44853] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:17,220 INFO [Listener at localhost/44853] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:17,228 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,256 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@64480317{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 08:18:17,268 INFO [Listener at localhost/44853] server.AbstractConnector(333): Started ServerConnector@71df00d8{HTTP/1.1, (http/1.1)}{0.0.0.0:39471} 2023-07-12 08:18:17,268 INFO [Listener at localhost/44853] server.Server(415): Started @7825ms 2023-07-12 08:18:17,272 INFO [Listener at localhost/44853] master.HMaster(444): hbase.rootdir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9, hbase.cluster.distributed=false 2023-07-12 08:18:17,363 INFO [Listener at localhost/44853] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:17,364 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:17,364 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:17,365 INFO [Listener at localhost/44853] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 
08:18:17,365 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:17,365 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:17,374 INFO [Listener at localhost/44853] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:17,377 INFO [Listener at localhost/44853] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36999 2023-07-12 08:18:17,380 INFO [Listener at localhost/44853] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:17,387 DEBUG [Listener at localhost/44853] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:17,388 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:17,390 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:17,392 INFO [Listener at localhost/44853] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36999 connecting to ZooKeeper ensemble=127.0.0.1:51057 2023-07-12 08:18:17,396 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:369990x0, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:17,400 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36999-0x101589c725b0001 connected 2023-07-12 08:18:17,402 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:17,405 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:17,408 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:17,410 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36999 2023-07-12 08:18:17,411 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36999 2023-07-12 08:18:17,411 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36999 2023-07-12 08:18:17,412 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36999 2023-07-12 08:18:17,413 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36999 2023-07-12 08:18:17,416 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:17,416 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:17,417 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:17,418 INFO [Listener at localhost/44853] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:17,418 INFO [Listener at localhost/44853] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:17,419 INFO [Listener at localhost/44853] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:17,419 INFO [Listener at localhost/44853] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 08:18:17,422 INFO [Listener at localhost/44853] http.HttpServer(1146): Jetty bound to port 34823 2023-07-12 08:18:17,422 INFO [Listener at localhost/44853] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:17,431 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,431 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6afee7fb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:17,432 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,432 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6ce589e{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:17,445 INFO [Listener at localhost/44853] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:17,446 INFO [Listener at localhost/44853] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:17,446 INFO [Listener at localhost/44853] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:17,447 INFO [Listener at localhost/44853] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:17,448 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,451 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2e98cdce{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:17,453 INFO [Listener at localhost/44853] server.AbstractConnector(333): Started ServerConnector@6d447d66{HTTP/1.1, (http/1.1)}{0.0.0.0:34823} 2023-07-12 08:18:17,453 INFO [Listener at localhost/44853] server.Server(415): Started @8010ms 2023-07-12 08:18:17,466 INFO [Listener at localhost/44853] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:17,466 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:17,466 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:17,467 INFO [Listener at localhost/44853] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:17,467 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:17,467 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:17,467 INFO [Listener at localhost/44853] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:17,469 INFO [Listener at localhost/44853] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42347 2023-07-12 08:18:17,470 INFO [Listener at localhost/44853] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:17,471 DEBUG [Listener at localhost/44853] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:17,472 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:17,474 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:17,475 INFO [Listener at localhost/44853] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42347 connecting to ZooKeeper ensemble=127.0.0.1:51057 2023-07-12 08:18:17,480 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): regionserver:423470x0, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:17,481 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:423470x0, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:17,482 
DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): regionserver:423470x0, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:17,482 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42347-0x101589c725b0002 connected 2023-07-12 08:18:17,483 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:17,487 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42347 2023-07-12 08:18:17,487 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42347 2023-07-12 08:18:17,490 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42347 2023-07-12 08:18:17,490 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42347 2023-07-12 08:18:17,493 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42347 2023-07-12 08:18:17,495 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:17,495 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:17,495 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:17,496 INFO [Listener at localhost/44853] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:17,496 INFO [Listener at localhost/44853] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:17,496 INFO [Listener at localhost/44853] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:17,496 INFO [Listener at localhost/44853] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
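Each region server above connects to the ZooKeeper ensemble at 127.0.0.1:51057 via RecoverableZooKeeper and then, through ZKUtil, sets watchers on /hbase/master, /hbase/running and /hbase/acl before those znodes exist. Below is a minimal sketch of the same watch-before-exists pattern against that ensemble, assuming the hbase-zookeeper ZKWatcher/ZKUtil API named in the records above; the identifier string and the null Abortable are this sketch's own choices.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.zookeeper.ZKUtil;
    import org.apache.hadoop.hbase.zookeeper.ZKWatcher;

    public class ZkWatchSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Point at the MiniZooKeeperCluster from the log (client port 51057 on localhost).
        conf.set(HConstants.ZOOKEEPER_QUORUM, "127.0.0.1");
        conf.setInt(HConstants.ZOOKEEPER_CLIENT_PORT, 51057);
        try (ZKWatcher zkw = new ZKWatcher(conf, "zk-watch-sketch", null)) {
          // Same call pattern as the servers: set a watch even if the znode does not yet exist.
          boolean masterUp = ZKUtil.watchAndCheckExists(zkw, "/hbase/master");
          boolean clusterUp = ZKUtil.watchAndCheckExists(zkw, "/hbase/running");
          System.out.println("master znode present: " + masterUp + ", running znode present: " + clusterUp);
        }
      }
    }
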
2023-07-12 08:18:17,497 INFO [Listener at localhost/44853] http.HttpServer(1146): Jetty bound to port 36053 2023-07-12 08:18:17,497 INFO [Listener at localhost/44853] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:17,500 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,501 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ee296f1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:17,501 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,501 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@35b16dd4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:17,510 INFO [Listener at localhost/44853] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:17,511 INFO [Listener at localhost/44853] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:17,511 INFO [Listener at localhost/44853] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:17,511 INFO [Listener at localhost/44853] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:17,515 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,516 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@66df3ef2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:17,517 INFO [Listener at localhost/44853] server.AbstractConnector(333): Started ServerConnector@44d8fb02{HTTP/1.1, (http/1.1)}{0.0.0.0:36053} 2023-07-12 08:18:17,517 INFO [Listener at localhost/44853] server.Server(415): Started @8074ms 2023-07-12 08:18:17,535 INFO [Listener at localhost/44853] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:17,535 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:17,536 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:17,536 INFO [Listener at localhost/44853] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:17,536 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-12 08:18:17,536 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:17,536 INFO [Listener at localhost/44853] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:17,538 INFO [Listener at localhost/44853] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38647 2023-07-12 08:18:17,538 INFO [Listener at localhost/44853] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:17,543 DEBUG [Listener at localhost/44853] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:17,544 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:17,546 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:17,547 INFO [Listener at localhost/44853] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38647 connecting to ZooKeeper ensemble=127.0.0.1:51057 2023-07-12 08:18:17,552 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:386470x0, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:17,553 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38647-0x101589c725b0003 connected 2023-07-12 08:18:17,553 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:17,554 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:17,554 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:17,555 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38647 2023-07-12 08:18:17,562 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38647 2023-07-12 08:18:17,563 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38647 2023-07-12 08:18:17,566 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38647 2023-07-12 08:18:17,567 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38647 2023-07-12 08:18:17,569 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:17,570 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:17,570 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:17,570 INFO [Listener at localhost/44853] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:17,571 INFO [Listener at localhost/44853] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:17,571 INFO [Listener at localhost/44853] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:17,571 INFO [Listener at localhost/44853] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 08:18:17,572 INFO [Listener at localhost/44853] http.HttpServer(1146): Jetty bound to port 33547 2023-07-12 08:18:17,572 INFO [Listener at localhost/44853] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:17,583 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,583 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7605c194{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:17,584 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,584 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7c1f78c1{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:17,596 INFO [Listener at localhost/44853] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:17,597 INFO [Listener at localhost/44853] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:17,597 INFO [Listener at localhost/44853] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:17,597 INFO [Listener at localhost/44853] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:17,599 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:17,600 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@470fdab8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:17,601 INFO [Listener at localhost/44853] server.AbstractConnector(333): Started ServerConnector@78d2fac4{HTTP/1.1, (http/1.1)}{0.0.0.0:33547} 2023-07-12 08:18:17,601 INFO [Listener at localhost/44853] server.Server(415): Started @8158ms 2023-07-12 08:18:17,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:17,612 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@3a033f72{HTTP/1.1, (http/1.1)}{0.0.0.0:38143} 2023-07-12 08:18:17,612 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8169ms 2023-07-12 08:18:17,612 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44301,1689149895428 2023-07-12 08:18:17,623 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 08:18:17,625 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44301,1689149895428 2023-07-12 08:18:17,644 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:17,644 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:17,644 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:17,644 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:17,644 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:17,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 08:18:17,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44301,1689149895428 from backup master directory 2023-07-12 
08:18:17,648 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 08:18:17,652 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44301,1689149895428 2023-07-12 08:18:17,653 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 08:18:17,653 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 08:18:17,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44301,1689149895428 2023-07-12 08:18:17,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-12 08:18:17,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-12 08:18:17,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/hbase.id with ID: ceaabd00-77c9-4b5d-a071-1c070cc70bed 2023-07-12 08:18:17,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:17,858 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:17,937 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2c58bd27 to 127.0.0.1:51057 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:17,975 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b72efea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:18,011 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:18,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 08:18:18,035 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-12 08:18:18,035 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-12 08:18:18,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 08:18:18,049 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 08:18:18,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:18,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/data/master/store-tmp 2023-07-12 08:18:18,148 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:18,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 08:18:18,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:18,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:18,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 08:18:18,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:18,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 08:18:18,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 08:18:18,150 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/WALs/jenkins-hbase4.apache.org,44301,1689149895428 2023-07-12 08:18:18,174 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44301%2C1689149895428, suffix=, logDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/WALs/jenkins-hbase4.apache.org,44301,1689149895428, archiveDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/oldWALs, maxLogs=10 2023-07-12 08:18:18,232 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK] 2023-07-12 08:18:18,232 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK] 2023-07-12 08:18:18,232 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK] 2023-07-12 08:18:18,241 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 08:18:18,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/WALs/jenkins-hbase4.apache.org,44301,1689149895428/jenkins-hbase4.apache.org%2C44301%2C1689149895428.1689149898185 2023-07-12 08:18:18,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK], DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK], DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK]] 2023-07-12 08:18:18,329 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:18,329 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:18,336 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:18,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:18,421 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:18,430 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 08:18:18,470 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 08:18:18,483 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-12 08:18:18,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:18,492 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:18,514 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:18,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:18,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9446457760, jitterRate=-0.1202300637960434}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:18,520 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 08:18:18,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 08:18:18,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 08:18:18,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 08:18:18,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 08:18:18,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-12 08:18:18,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 37 msec 2023-07-12 08:18:18,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 08:18:18,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 08:18:18,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-12 08:18:18,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 08:18:18,635 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 08:18:18,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 08:18:18,646 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:18,648 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 08:18:18,648 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 08:18:18,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 08:18:18,667 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:18,667 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:18,667 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:18,667 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:18,668 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:18,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44301,1689149895428, sessionid=0x101589c725b0000, setting cluster-up flag (Was=false) 2023-07-12 08:18:18,686 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:18,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 08:18:18,692 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44301,1689149895428 2023-07-12 08:18:18,697 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:18,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 08:18:18,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44301,1689149895428 2023-07-12 08:18:18,707 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.hbase-snapshot/.tmp 2023-07-12 08:18:18,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 08:18:18,796 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 08:18:18,798 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 08:18:18,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 08:18:18,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-12 08:18:18,806 INFO [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(951): ClusterId : ceaabd00-77c9-4b5d-a071-1c070cc70bed 2023-07-12 08:18:18,806 INFO [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(951): ClusterId : ceaabd00-77c9-4b5d-a071-1c070cc70bed 2023-07-12 08:18:18,806 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(951): ClusterId : ceaabd00-77c9-4b5d-a071-1c070cc70bed 2023-07-12 08:18:18,813 DEBUG [RS:2;jenkins-hbase4:38647] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:18,813 DEBUG [RS:0;jenkins-hbase4:36999] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:18,813 DEBUG [RS:1;jenkins-hbase4:42347] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:18,819 DEBUG [RS:1;jenkins-hbase4:42347] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:18,819 DEBUG [RS:2;jenkins-hbase4:38647] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:18,819 DEBUG [RS:0;jenkins-hbase4:36999] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:18,820 DEBUG [RS:2;jenkins-hbase4:38647] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:18,820 DEBUG [RS:1;jenkins-hbase4:42347] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:18,820 DEBUG [RS:0;jenkins-hbase4:36999] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:18,825 DEBUG [RS:1;jenkins-hbase4:42347] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:18,825 DEBUG [RS:0;jenkins-hbase4:36999] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:18,825 DEBUG [RS:2;jenkins-hbase4:38647] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:18,827 DEBUG [RS:1;jenkins-hbase4:42347] zookeeper.ReadOnlyZKClient(139): Connect 0x257afa1a to 127.0.0.1:51057 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:18,828 DEBUG [RS:2;jenkins-hbase4:38647] zookeeper.ReadOnlyZKClient(139): Connect 0x0b1d0fb0 to 127.0.0.1:51057 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:18,828 DEBUG [RS:0;jenkins-hbase4:36999] zookeeper.ReadOnlyZKClient(139): Connect 0x004452b0 to 127.0.0.1:51057 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:18,835 DEBUG [RS:1;jenkins-hbase4:42347] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2bbd00c5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:18,836 DEBUG [RS:2;jenkins-hbase4:38647] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5daf0a3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:18,836 DEBUG [RS:1;jenkins-hbase4:42347] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f82916a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:18,837 DEBUG [RS:2;jenkins-hbase4:38647] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e564c0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:18,838 DEBUG [RS:0;jenkins-hbase4:36999] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e83acc8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:18,839 DEBUG [RS:0;jenkins-hbase4:36999] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4fbea6f3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:18,874 DEBUG [RS:1;jenkins-hbase4:42347] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:42347 2023-07-12 08:18:18,874 DEBUG [RS:2;jenkins-hbase4:38647] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:38647 2023-07-12 08:18:18,881 DEBUG [RS:0;jenkins-hbase4:36999] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36999 2023-07-12 08:18:18,882 INFO [RS:1;jenkins-hbase4:42347] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:18,882 INFO [RS:0;jenkins-hbase4:36999] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:18,882 INFO [RS:0;jenkins-hbase4:36999] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:18,883 DEBUG [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 08:18:18,882 INFO [RS:2;jenkins-hbase4:38647] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:18,882 INFO [RS:1;jenkins-hbase4:42347] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:18,885 INFO [RS:2;jenkins-hbase4:38647] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:18,885 DEBUG [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 08:18:18,885 DEBUG [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 08:18:18,887 INFO [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44301,1689149895428 with isa=jenkins-hbase4.apache.org/172.31.14.131:38647, startcode=1689149897534 2023-07-12 08:18:18,887 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44301,1689149895428 with isa=jenkins-hbase4.apache.org/172.31.14.131:42347, startcode=1689149897465 2023-07-12 08:18:18,887 INFO [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44301,1689149895428 with isa=jenkins-hbase4.apache.org/172.31.14.131:36999, startcode=1689149897362 2023-07-12 08:18:18,910 DEBUG [RS:2;jenkins-hbase4:38647] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:18,910 DEBUG [RS:1;jenkins-hbase4:42347] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:18,910 DEBUG [RS:0;jenkins-hbase4:36999] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:18,922 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 08:18:18,976 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44041, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:18,976 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49533, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:18,976 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51429, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:18,978 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 08:18:18,984 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:18,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, 
ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 08:18:18,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 08:18:18,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 08:18:18,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:18,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:18,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:18,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:18,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-12 08:18:18,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:18,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:18,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:18,994 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:18,996 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689149928996 2023-07-12 08:18:18,997 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:19,001 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 08:18:19,003 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 08:18:19,004 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 08:18:19,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 08:18:19,013 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:19,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 08:18:19,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 08:18:19,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 08:18:19,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 08:18:19,027 DEBUG [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(2830): Master is not 
running yet 2023-07-12 08:18:19,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,039 WARN [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 08:18:19,039 DEBUG [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 08:18:19,039 WARN [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 08:18:19,029 DEBUG [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 08:18:19,040 WARN [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 08:18:19,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 08:18:19,045 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 08:18:19,045 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 08:18:19,049 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 08:18:19,049 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 08:18:19,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689149899051,5,FailOnTimeoutGroup] 2023-07-12 08:18:19,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689149899052,5,FailOnTimeoutGroup] 2023-07-12 08:18:19,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 08:18:19,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:19,123 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:19,125 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:19,125 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9 2023-07-12 08:18:19,142 INFO [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44301,1689149895428 with isa=jenkins-hbase4.apache.org/172.31.14.131:38647, startcode=1689149897534 2023-07-12 08:18:19,142 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44301,1689149895428 with isa=jenkins-hbase4.apache.org/172.31.14.131:42347, startcode=1689149897465 2023-07-12 08:18:19,142 INFO [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44301,1689149895428 with isa=jenkins-hbase4.apache.org/172.31.14.131:36999, startcode=1689149897362 2023-07-12 08:18:19,154 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44301] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:19,155 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 08:18:19,156 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 08:18:19,157 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:19,159 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 08:18:19,161 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44301] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:19,161 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 08:18:19,161 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 08:18:19,162 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44301] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:19,162 DEBUG [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9 2023-07-12 08:18:19,163 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 08:18:19,163 DEBUG [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42813 2023-07-12 08:18:19,163 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 08:18:19,163 DEBUG [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9 2023-07-12 08:18:19,163 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/info 2023-07-12 08:18:19,164 DEBUG [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9 2023-07-12 08:18:19,163 DEBUG [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42813 2023-07-12 08:18:19,163 DEBUG [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39471 2023-07-12 08:18:19,164 DEBUG [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39471 2023-07-12 08:18:19,164 DEBUG [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42813 2023-07-12 08:18:19,164 DEBUG [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39471 2023-07-12 08:18:19,165 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 08:18:19,166 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:19,166 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 08:18:19,169 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/rep_barrier 2023-07-12 08:18:19,170 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 08:18:19,171 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:19,171 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 08:18:19,175 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/table 2023-07-12 08:18:19,175 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:19,175 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 08:18:19,176 DEBUG [RS:2;jenkins-hbase4:38647] zookeeper.ZKUtil(162): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:19,176 DEBUG [RS:0;jenkins-hbase4:36999] zookeeper.ZKUtil(162): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:19,177 DEBUG [RS:1;jenkins-hbase4:42347] zookeeper.ZKUtil(162): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:19,178 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:19,177 WARN [RS:0;jenkins-hbase4:36999] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 08:18:19,177 WARN [RS:2;jenkins-hbase4:38647] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 08:18:19,183 INFO [RS:0;jenkins-hbase4:36999] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:19,183 WARN [RS:1;jenkins-hbase4:42347] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 08:18:19,183 DEBUG [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:19,188 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740 2023-07-12 08:18:19,183 INFO [RS:1;jenkins-hbase4:42347] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:19,183 INFO [RS:2;jenkins-hbase4:38647] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:19,192 DEBUG [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:19,192 DEBUG [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:19,210 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740 2023-07-12 08:18:19,216 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 08:18:19,219 DEBUG [RS:1;jenkins-hbase4:42347] zookeeper.ZKUtil(162): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:19,219 DEBUG [RS:2;jenkins-hbase4:38647] zookeeper.ZKUtil(162): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:19,219 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 08:18:19,219 DEBUG [RS:0;jenkins-hbase4:36999] zookeeper.ZKUtil(162): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:19,219 DEBUG [RS:1;jenkins-hbase4:42347] zookeeper.ZKUtil(162): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:19,219 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36999,1689149897362] 2023-07-12 08:18:19,219 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38647,1689149897534] 2023-07-12 08:18:19,219 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42347,1689149897465] 2023-07-12 08:18:19,219 DEBUG [RS:2;jenkins-hbase4:38647] zookeeper.ZKUtil(162): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:19,220 DEBUG [RS:0;jenkins-hbase4:36999] zookeeper.ZKUtil(162): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:19,220 DEBUG [RS:1;jenkins-hbase4:42347] zookeeper.ZKUtil(162): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:19,220 DEBUG [RS:2;jenkins-hbase4:38647] zookeeper.ZKUtil(162): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:19,248 DEBUG [RS:0;jenkins-hbase4:36999] zookeeper.ZKUtil(162): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:19,255 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:19,256 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10158905760, jitterRate=-0.05387817323207855}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 08:18:19,256 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 
08:18:19,256 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 08:18:19,256 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 08:18:19,256 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 08:18:19,256 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 08:18:19,256 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 08:18:19,257 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 08:18:19,257 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 08:18:19,265 DEBUG [RS:1;jenkins-hbase4:42347] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:19,265 DEBUG [RS:0;jenkins-hbase4:36999] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:19,265 DEBUG [RS:2;jenkins-hbase4:38647] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:19,266 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 08:18:19,266 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 08:18:19,276 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 08:18:19,276 INFO [RS:1;jenkins-hbase4:42347] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:19,277 INFO [RS:2;jenkins-hbase4:38647] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:19,276 INFO [RS:0;jenkins-hbase4:36999] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:19,293 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 08:18:19,296 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 08:18:19,301 INFO [RS:1;jenkins-hbase4:42347] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:19,301 INFO [RS:0;jenkins-hbase4:36999] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:19,301 INFO [RS:2;jenkins-hbase4:38647] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:19,306 INFO [RS:1;jenkins-hbase4:42347] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: 
unlimited, tuning period: 60000 ms 2023-07-12 08:18:19,306 INFO [RS:0;jenkins-hbase4:36999] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 08:18:19,306 INFO [RS:2;jenkins-hbase4:38647] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 08:18:19,307 INFO [RS:0;jenkins-hbase4:36999] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,307 INFO [RS:1;jenkins-hbase4:42347] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,307 INFO [RS:2;jenkins-hbase4:38647] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,308 INFO [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:19,308 INFO [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:19,308 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:19,317 INFO [RS:0;jenkins-hbase4:36999] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,317 INFO [RS:2;jenkins-hbase4:38647] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,317 INFO [RS:1;jenkins-hbase4:42347] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:19,317 DEBUG [RS:0;jenkins-hbase4:36999] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,317 DEBUG [RS:1;jenkins-hbase4:42347] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,317 DEBUG [RS:2;jenkins-hbase4:38647] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:1;jenkins-hbase4:42347] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:2;jenkins-hbase4:38647] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:1;jenkins-hbase4:42347] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:2;jenkins-hbase4:38647] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:1;jenkins-hbase4:42347] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:2;jenkins-hbase4:38647] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:1;jenkins-hbase4:42347] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:2;jenkins-hbase4:38647] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:1;jenkins-hbase4:42347] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:19,318 DEBUG [RS:2;jenkins-hbase4:38647] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:19,318 DEBUG [RS:1;jenkins-hbase4:42347] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:2;jenkins-hbase4:38647] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,317 DEBUG [RS:0;jenkins-hbase4:36999] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:2;jenkins-hbase4:38647] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:0;jenkins-hbase4:36999] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,318 DEBUG [RS:1;jenkins-hbase4:42347] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,319 DEBUG [RS:0;jenkins-hbase4:36999] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,319 DEBUG [RS:2;jenkins-hbase4:38647] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,319 DEBUG [RS:0;jenkins-hbase4:36999] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,319 DEBUG [RS:2;jenkins-hbase4:38647] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,319 DEBUG [RS:0;jenkins-hbase4:36999] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:19,319 DEBUG [RS:1;jenkins-hbase4:42347] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,319 DEBUG [RS:0;jenkins-hbase4:36999] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,319 DEBUG [RS:1;jenkins-hbase4:42347] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,320 DEBUG [RS:0;jenkins-hbase4:36999] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,320 DEBUG [RS:0;jenkins-hbase4:36999] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,320 DEBUG [RS:0;jenkins-hbase4:36999] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:19,322 INFO [RS:1;jenkins-hbase4:42347] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,322 INFO [RS:2;jenkins-hbase4:38647] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,322 INFO [RS:1;jenkins-hbase4:42347] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,322 INFO [RS:2;jenkins-hbase4:38647] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,322 INFO [RS:1;jenkins-hbase4:42347] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,322 INFO [RS:0;jenkins-hbase4:36999] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:19,323 INFO [RS:0;jenkins-hbase4:36999] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,323 INFO [RS:0;jenkins-hbase4:36999] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,322 INFO [RS:2;jenkins-hbase4:38647] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,345 INFO [RS:1;jenkins-hbase4:42347] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:19,347 INFO [RS:0;jenkins-hbase4:36999] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:19,345 INFO [RS:2;jenkins-hbase4:38647] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:19,355 INFO [RS:0;jenkins-hbase4:36999] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36999,1689149897362-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,355 INFO [RS:2;jenkins-hbase4:38647] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38647,1689149897534-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,350 INFO [RS:1;jenkins-hbase4:42347] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42347,1689149897465-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,379 INFO [RS:2;jenkins-hbase4:38647] regionserver.Replication(203): jenkins-hbase4.apache.org,38647,1689149897534 started 2023-07-12 08:18:19,379 INFO [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38647,1689149897534, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38647, sessionid=0x101589c725b0003 2023-07-12 08:18:19,382 INFO [RS:0;jenkins-hbase4:36999] regionserver.Replication(203): jenkins-hbase4.apache.org,36999,1689149897362 started 2023-07-12 08:18:19,383 DEBUG [RS:2;jenkins-hbase4:38647] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:19,383 INFO [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36999,1689149897362, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36999, sessionid=0x101589c725b0001 2023-07-12 08:18:19,383 INFO [RS:1;jenkins-hbase4:42347] regionserver.Replication(203): jenkins-hbase4.apache.org,42347,1689149897465 started 2023-07-12 08:18:19,383 DEBUG [RS:0;jenkins-hbase4:36999] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:19,383 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42347,1689149897465, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42347, sessionid=0x101589c725b0002 2023-07-12 08:18:19,383 DEBUG [RS:2;jenkins-hbase4:38647] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:19,384 DEBUG [RS:1;jenkins-hbase4:42347] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:19,384 DEBUG [RS:2;jenkins-hbase4:38647] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38647,1689149897534' 2023-07-12 08:18:19,383 DEBUG [RS:0;jenkins-hbase4:36999] 
flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:19,384 DEBUG [RS:2;jenkins-hbase4:38647] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:19,384 DEBUG [RS:0;jenkins-hbase4:36999] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36999,1689149897362' 2023-07-12 08:18:19,384 DEBUG [RS:1;jenkins-hbase4:42347] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:19,385 DEBUG [RS:1;jenkins-hbase4:42347] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42347,1689149897465' 2023-07-12 08:18:19,385 DEBUG [RS:1;jenkins-hbase4:42347] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:19,385 DEBUG [RS:0;jenkins-hbase4:36999] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:19,385 DEBUG [RS:2;jenkins-hbase4:38647] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:19,385 DEBUG [RS:1;jenkins-hbase4:42347] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:19,385 DEBUG [RS:0;jenkins-hbase4:36999] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:19,386 DEBUG [RS:2;jenkins-hbase4:38647] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:19,386 DEBUG [RS:2;jenkins-hbase4:38647] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 08:18:19,386 DEBUG [RS:2;jenkins-hbase4:38647] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:19,386 DEBUG [RS:2;jenkins-hbase4:38647] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38647,1689149897534' 2023-07-12 08:18:19,386 DEBUG [RS:1;jenkins-hbase4:42347] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:19,386 DEBUG [RS:0;jenkins-hbase4:36999] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:19,386 DEBUG [RS:1;jenkins-hbase4:42347] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 08:18:19,386 DEBUG [RS:2;jenkins-hbase4:38647] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:19,386 DEBUG [RS:1;jenkins-hbase4:42347] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:19,387 DEBUG [RS:1;jenkins-hbase4:42347] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42347,1689149897465' 2023-07-12 08:18:19,387 DEBUG [RS:1;jenkins-hbase4:42347] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:19,386 DEBUG [RS:0;jenkins-hbase4:36999] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot 
starting 2023-07-12 08:18:19,387 DEBUG [RS:0;jenkins-hbase4:36999] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:19,387 DEBUG [RS:0;jenkins-hbase4:36999] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36999,1689149897362' 2023-07-12 08:18:19,387 DEBUG [RS:0;jenkins-hbase4:36999] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:19,387 DEBUG [RS:2;jenkins-hbase4:38647] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:19,387 DEBUG [RS:1;jenkins-hbase4:42347] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:19,388 DEBUG [RS:2;jenkins-hbase4:38647] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:19,388 DEBUG [RS:0;jenkins-hbase4:36999] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:19,388 INFO [RS:2;jenkins-hbase4:38647] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 08:18:19,388 DEBUG [RS:1;jenkins-hbase4:42347] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:19,388 INFO [RS:2;jenkins-hbase4:38647] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 08:18:19,388 INFO [RS:1;jenkins-hbase4:42347] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 08:18:19,389 INFO [RS:1;jenkins-hbase4:42347] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 08:18:19,390 DEBUG [RS:0;jenkins-hbase4:36999] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:19,390 INFO [RS:0;jenkins-hbase4:36999] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 08:18:19,390 INFO [RS:0;jenkins-hbase4:36999] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 08:18:19,448 DEBUG [jenkins-hbase4:44301] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 08:18:19,466 DEBUG [jenkins-hbase4:44301] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:19,467 DEBUG [jenkins-hbase4:44301] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:19,467 DEBUG [jenkins-hbase4:44301] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:19,467 DEBUG [jenkins-hbase4:44301] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:19,467 DEBUG [jenkins-hbase4:44301] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:19,472 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42347,1689149897465, state=OPENING 2023-07-12 08:18:19,483 DEBUG [PEWorker-5] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 08:18:19,485 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:19,486 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 08:18:19,490 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:19,505 INFO [RS:0;jenkins-hbase4:36999] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36999%2C1689149897362, suffix=, logDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,36999,1689149897362, archiveDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/oldWALs, maxLogs=32 2023-07-12 08:18:19,506 INFO [RS:2;jenkins-hbase4:38647] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38647%2C1689149897534, suffix=, logDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,38647,1689149897534, archiveDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/oldWALs, maxLogs=32 2023-07-12 08:18:19,507 INFO [RS:1;jenkins-hbase4:42347] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42347%2C1689149897465, suffix=, logDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,42347,1689149897465, archiveDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/oldWALs, maxLogs=32 2023-07-12 08:18:19,534 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK] 2023-07-12 08:18:19,534 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK] 2023-07-12 08:18:19,534 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK] 2023-07-12 08:18:19,560 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK] 2023-07-12 08:18:19,560 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK] 2023-07-12 08:18:19,560 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK] 2023-07-12 08:18:19,561 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK] 2023-07-12 08:18:19,561 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK] 2023-07-12 08:18:19,561 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK] 2023-07-12 08:18:19,566 INFO [RS:2;jenkins-hbase4:38647] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,38647,1689149897534/jenkins-hbase4.apache.org%2C38647%2C1689149897534.1689149899509 2023-07-12 08:18:19,575 DEBUG [RS:2;jenkins-hbase4:38647] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK], DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK], DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK]] 2023-07-12 08:18:19,577 INFO [RS:0;jenkins-hbase4:36999] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,36999,1689149897362/jenkins-hbase4.apache.org%2C36999%2C1689149897362.1689149899509 2023-07-12 08:18:19,577 INFO [RS:1;jenkins-hbase4:42347] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,42347,1689149897465/jenkins-hbase4.apache.org%2C42347%2C1689149897465.1689149899509 2023-07-12 08:18:19,578 DEBUG [RS:0;jenkins-hbase4:36999] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK], DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK], DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK]] 2023-07-12 08:18:19,580 WARN [ReadOnlyZKClient-127.0.0.1:51057@0x2c58bd27] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 08:18:19,584 DEBUG [RS:1;jenkins-hbase4:42347] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK], DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK], DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK]] 2023-07-12 08:18:19,613 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44301,1689149895428] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:19,617 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34408, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:19,618 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42347] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:34408 deadline: 1689149959617, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:19,677 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:19,681 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:19,689 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34424, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:19,705 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 08:18:19,706 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:19,709 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42347%2C1689149897465.meta, suffix=.meta, logDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,42347,1689149897465, archiveDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/oldWALs, maxLogs=32 2023-07-12 08:18:19,727 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK] 2023-07-12 08:18:19,727 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK] 2023-07-12 
08:18:19,729 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK] 2023-07-12 08:18:19,733 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,42347,1689149897465/jenkins-hbase4.apache.org%2C42347%2C1689149897465.meta.1689149899710.meta 2023-07-12 08:18:19,734 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK], DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK], DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK]] 2023-07-12 08:18:19,735 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:19,736 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 08:18:19,739 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 08:18:19,741 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 08:18:19,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 08:18:19,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:19,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 08:18:19,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 08:18:19,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 08:18:19,753 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/info 2023-07-12 08:18:19,753 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/info 2023-07-12 08:18:19,753 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 08:18:19,754 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:19,754 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 08:18:19,756 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/rep_barrier 2023-07-12 08:18:19,756 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/rep_barrier 2023-07-12 08:18:19,757 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 08:18:19,757 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:19,758 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 08:18:19,759 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/table 2023-07-12 08:18:19,759 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/table 2023-07-12 08:18:19,760 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 08:18:19,760 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:19,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740 2023-07-12 08:18:19,764 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740 2023-07-12 08:18:19,768 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 08:18:19,770 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 08:18:19,771 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10663438240, jitterRate=-0.00688992440700531}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 08:18:19,771 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 08:18:19,782 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689149899673 2023-07-12 08:18:19,801 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 08:18:19,802 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 08:18:19,803 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42347,1689149897465, state=OPEN 2023-07-12 08:18:19,807 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 08:18:19,807 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 08:18:19,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 08:18:19,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42347,1689149897465 in 317 msec 2023-07-12 08:18:19,823 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 08:18:19,823 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 540 msec 2023-07-12 08:18:19,829 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0180 sec 2023-07-12 08:18:19,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689149899829, completionTime=-1 2023-07-12 08:18:19,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 08:18:19,829 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 08:18:19,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 08:18:19,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689149959882 2023-07-12 08:18:19,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689150019882 2023-07-12 08:18:19,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 52 msec 2023-07-12 08:18:19,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44301,1689149895428-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44301,1689149895428-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44301,1689149895428-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44301, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:19,920 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 08:18:19,937 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 08:18:19,939 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:19,950 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 08:18:19,953 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:19,956 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:19,973 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:19,976 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2 empty. 2023-07-12 08:18:19,977 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:19,977 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 08:18:20,018 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:20,022 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => ae71929909c3f585c1f0e7f3408f83d2, NAME => 'hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:20,044 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:20,044 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing ae71929909c3f585c1f0e7f3408f83d2, disabling compactions & flushes 2023-07-12 08:18:20,044 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 
2023-07-12 08:18:20,044 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:20,044 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. after waiting 0 ms 2023-07-12 08:18:20,044 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:20,044 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:20,044 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for ae71929909c3f585c1f0e7f3408f83d2: 2023-07-12 08:18:20,049 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:20,066 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149900052"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149900052"}]},"ts":"1689149900052"} 2023-07-12 08:18:20,097 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:20,099 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:20,105 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149900100"}]},"ts":"1689149900100"} 2023-07-12 08:18:20,110 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 08:18:20,115 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:20,115 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:20,115 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:20,115 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:20,115 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:20,117 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, ASSIGN}] 2023-07-12 08:18:20,120 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, ASSIGN 2023-07-12 08:18:20,122 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38647,1689149897534; forceNewPlan=false, retain=false 2023-07-12 08:18:20,135 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44301,1689149895428] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:20,141 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44301,1689149895428] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 08:18:20,145 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:20,149 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:20,167 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:20,168 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184 empty. 
2023-07-12 08:18:20,169 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:20,169 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 08:18:20,230 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:20,232 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e819f13729c8274f2f0efb5a42e75184, NAME => 'hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:20,254 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:20,254 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing e819f13729c8274f2f0efb5a42e75184, disabling compactions & flushes 2023-07-12 08:18:20,254 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:20,254 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:20,254 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. after waiting 0 ms 2023-07-12 08:18:20,254 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:20,254 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 
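The hbase:rsgroup table created above differs from hbase:namespace in two ways: it loads the MultiRowMutationEndpoint coprocessor and pins the table to DisabledRegionSplitPolicy via the SPLIT_POLICY table attribute. A hedged sketch of an equivalent descriptor follows; the table name "rsgroup_like_demo" is hypothetical and an Admin handle is assumed to come from an existing Connection.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateRsGroupLikeTable {
  // Assumes an Admin obtained elsewhere, e.g. connection.getAdmin().
  static void createRsGroupLikeTable(Admin admin) throws Exception {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("rsgroup_like_demo")) // hypothetical name
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m")) // single 'm' family, defaults
        // Same coprocessor the master attaches to hbase:rsgroup in the log above.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // Equivalent of the SPLIT_POLICY metadata: never split this table's region.
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
    admin.createTable(td);
  }
}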
2023-07-12 08:18:20,254 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for e819f13729c8274f2f0efb5a42e75184: 2023-07-12 08:18:20,258 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:20,260 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149900259"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149900259"}]},"ts":"1689149900259"} 2023-07-12 08:18:20,263 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:20,265 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:20,265 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149900265"}]},"ts":"1689149900265"} 2023-07-12 08:18:20,271 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 08:18:20,273 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 08:18:20,275 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ae71929909c3f585c1f0e7f3408f83d2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:20,275 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149900274"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149900274"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149900274"}]},"ts":"1689149900274"} 2023-07-12 08:18:20,277 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:20,277 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:20,277 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:20,277 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:20,277 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:20,277 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, ASSIGN}] 2023-07-12 08:18:20,281 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure ae71929909c3f585c1f0e7f3408f83d2, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:20,282 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, ASSIGN 2023-07-12 08:18:20,284 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38647,1689149897534; forceNewPlan=false, retain=false 2023-07-12 08:18:20,435 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 08:18:20,436 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e819f13729c8274f2f0efb5a42e75184, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:20,437 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149900436"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149900436"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149900436"}]},"ts":"1689149900436"} 2023-07-12 08:18:20,463 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:20,463 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:20,468 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33510, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:20,470 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure e819f13729c8274f2f0efb5a42e75184, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:20,495 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 
2023-07-12 08:18:20,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ae71929909c3f585c1f0e7f3408f83d2, NAME => 'hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:20,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:20,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:20,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:20,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:20,510 INFO [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:20,513 DEBUG [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/info 2023-07-12 08:18:20,513 DEBUG [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/info 2023-07-12 08:18:20,514 INFO [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ae71929909c3f585c1f0e7f3408f83d2 columnFamilyName info 2023-07-12 08:18:20,518 INFO [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] regionserver.HStore(310): Store=ae71929909c3f585c1f0e7f3408f83d2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:20,522 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:20,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:20,529 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:20,534 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:20,535 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ae71929909c3f585c1f0e7f3408f83d2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11496301120, jitterRate=0.07067647576332092}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:20,535 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ae71929909c3f585c1f0e7f3408f83d2: 2023-07-12 08:18:20,539 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2., pid=8, masterSystemTime=1689149900463 2023-07-12 08:18:20,546 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:20,546 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 
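At this point the namespace region has been opened on jenkins-hbase4.apache.org,38647 and its post-open deploy tasks are done. In tests driven by HBaseTestingUtility, as this mini-cluster run is, waiting for that state is typically a one-liner per table; a minimal sketch, assuming a TEST_UTIL that already started the cluster elsewhere:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForSystemTables {
  // Assumption: the mini cluster was started elsewhere, as in the log above.
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void waitForSystemTables() throws Exception {
    // Returns once every region of the table is OPEN on some region server,
    // i.e. once the CreateTableProcedure/TransitRegionStateProcedure chain finishes.
    TEST_UTIL.waitTableAvailable(TableName.valueOf("hbase:namespace"));
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:rsgroup"));
  }
}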
2023-07-12 08:18:20,548 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ae71929909c3f585c1f0e7f3408f83d2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:20,549 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149900548"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149900548"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149900548"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149900548"}]},"ts":"1689149900548"} 2023-07-12 08:18:20,556 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-12 08:18:20,556 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure ae71929909c3f585c1f0e7f3408f83d2, server=jenkins-hbase4.apache.org,38647,1689149897534 in 271 msec 2023-07-12 08:18:20,560 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-12 08:18:20,561 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, ASSIGN in 439 msec 2023-07-12 08:18:20,564 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:20,564 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149900564"}]},"ts":"1689149900564"} 2023-07-12 08:18:20,567 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 08:18:20,570 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:20,573 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 630 msec 2023-07-12 08:18:20,630 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:20,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e819f13729c8274f2f0efb5a42e75184, NAME => 'hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:20,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 08:18:20,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 
service=MultiRowMutationService 2023-07-12 08:18:20,633 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 08:18:20,633 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:20,633 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:20,633 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:20,633 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:20,638 INFO [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:20,642 DEBUG [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m 2023-07-12 08:18:20,642 DEBUG [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m 2023-07-12 08:18:20,645 INFO [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e819f13729c8274f2f0efb5a42e75184 columnFamilyName m 2023-07-12 08:18:20,646 INFO [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] regionserver.HStore(310): Store=e819f13729c8274f2f0efb5a42e75184/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:20,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:20,649 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 
0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:20,653 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 08:18:20,654 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:20,656 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:20,656 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:20,664 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:20,664 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e819f13729c8274f2f0efb5a42e75184; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2a9c98f, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:20,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e819f13729c8274f2f0efb5a42e75184: 2023-07-12 08:18:20,666 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184., pid=9, masterSystemTime=1689149900624 2023-07-12 08:18:20,671 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:20,671 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 
2023-07-12 08:18:20,672 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e819f13729c8274f2f0efb5a42e75184, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:20,672 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149900672"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149900672"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149900672"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149900672"}]},"ts":"1689149900672"} 2023-07-12 08:18:20,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:20,688 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 08:18:20,688 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33522, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:20,689 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure e819f13729c8274f2f0efb5a42e75184, server=jenkins-hbase4.apache.org,38647,1689149897534 in 206 msec 2023-07-12 08:18:20,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-12 08:18:20,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, ASSIGN in 411 msec 2023-07-12 08:18:20,697 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:20,697 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149900697"}]},"ts":"1689149900697"} 2023-07-12 08:18:20,700 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 08:18:20,704 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:20,708 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 569 msec 2023-07-12 08:18:20,713 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 08:18:20,732 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:20,750 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, 
refreshing cached information 2023-07-12 08:18:20,750 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-12 08:18:20,751 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 53 msec 2023-07-12 08:18:20,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 08:18:20,780 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:20,787 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 24 msec 2023-07-12 08:18:20,801 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 08:18:20,807 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 08:18:20,808 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.153sec 2023-07-12 08:18:20,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 08:18:20,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 08:18:20,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 08:18:20,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44301,1689149895428-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 08:18:20,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44301,1689149895428-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
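Besides the system tables, the freshly initialized master runs a CreateNamespaceProcedure for the built-in 'default' and 'hbase' namespaces (pid=10 and pid=11 above) before declaring initialization complete. User namespaces go through the same procedure; a short sketch against the Admin API, where the namespace name "demo_ns" is hypothetical:

import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;

public class NamespaceExample {
  // Assumes an Admin handle to the running cluster.
  static void createAndListNamespaces(Admin admin) throws Exception {
    // Triggers a CreateNamespaceProcedure on the master, like pid=10/pid=11 above.
    admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
    for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
      System.out.println(ns.getName()); // e.g. default, hbase, demo_ns
    }
  }
}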
2023-07-12 08:18:20,820 DEBUG [Listener at localhost/44853] zookeeper.ReadOnlyZKClient(139): Connect 0x62c69654 to 127.0.0.1:51057 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:20,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 08:18:20,836 DEBUG [Listener at localhost/44853] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c6d4550, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:20,858 DEBUG [hconnection-0x2c378da6-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:20,858 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:20,858 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:20,861 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 08:18:20,868 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 08:18:20,874 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33998, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:20,885 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44301,1689149895428 2023-07-12 08:18:20,886 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:20,896 DEBUG [Listener at localhost/44853] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 08:18:20,900 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58548, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 08:18:20,917 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 08:18:20,917 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:20,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-12 08:18:20,931 DEBUG [Listener at localhost/44853] zookeeper.ReadOnlyZKClient(139): Connect 0x3604583d to 127.0.0.1:51057 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:20,947 DEBUG [Listener at localhost/44853] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78ef668c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:20,948 INFO [Listener at localhost/44853] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51057 2023-07-12 08:18:20,958 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:20,967 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101589c725b000a connected 2023-07-12 08:18:20,997 INFO [Listener at localhost/44853] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=419, OpenFileDescriptor=673, MaxFileDescriptor=60000, SystemLoadAverage=571, ProcessCount=173, AvailableMemoryMB=4613 2023-07-12 08:18:21,002 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-12 08:18:21,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:21,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:21,093 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 08:18:21,106 INFO [Listener at localhost/44853] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:21,107 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:21,107 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:21,107 INFO [Listener at localhost/44853] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:21,107 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:21,107 INFO [Listener at localhost/44853] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:21,107 INFO [Listener at localhost/44853] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:21,112 INFO [Listener at localhost/44853] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41817 2023-07-12 08:18:21,113 INFO [Listener at localhost/44853] hfile.BlockCacheFactory(142): Allocating BlockCache 
size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:21,117 DEBUG [Listener at localhost/44853] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:21,119 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:21,120 INFO [Listener at localhost/44853] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:21,122 INFO [Listener at localhost/44853] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41817 connecting to ZooKeeper ensemble=127.0.0.1:51057 2023-07-12 08:18:21,127 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:418170x0, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:21,129 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41817-0x101589c725b000b connected 2023-07-12 08:18:21,130 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(162): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 08:18:21,131 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(162): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 08:18:21,132 DEBUG [Listener at localhost/44853] zookeeper.ZKUtil(164): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:21,136 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41817 2023-07-12 08:18:21,137 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41817 2023-07-12 08:18:21,139 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41817 2023-07-12 08:18:21,141 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41817 2023-07-12 08:18:21,142 DEBUG [Listener at localhost/44853] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41817 2023-07-12 08:18:21,145 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:21,145 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:21,145 INFO [Listener at localhost/44853] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:21,145 INFO [Listener at localhost/44853] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:21,146 INFO [Listener at localhost/44853] http.HttpServer(886): Added 
filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:21,146 INFO [Listener at localhost/44853] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:21,146 INFO [Listener at localhost/44853] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 08:18:21,146 INFO [Listener at localhost/44853] http.HttpServer(1146): Jetty bound to port 38153 2023-07-12 08:18:21,147 INFO [Listener at localhost/44853] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:21,157 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:21,158 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1755bd06{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:21,159 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:21,159 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b33dcd2{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:21,170 INFO [Listener at localhost/44853] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:21,173 INFO [Listener at localhost/44853] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:21,173 INFO [Listener at localhost/44853] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:21,174 INFO [Listener at localhost/44853] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:21,175 INFO [Listener at localhost/44853] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:21,176 INFO [Listener at localhost/44853] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@54c35b24{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:21,178 INFO [Listener at localhost/44853] server.AbstractConnector(333): Started ServerConnector@7adb5e78{HTTP/1.1, (http/1.1)}{0.0.0.0:38153} 2023-07-12 08:18:21,178 INFO [Listener at localhost/44853] server.Server(415): Started @11735ms 2023-07-12 08:18:21,182 INFO [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(951): ClusterId : ceaabd00-77c9-4b5d-a071-1c070cc70bed 2023-07-12 08:18:21,183 DEBUG [RS:3;jenkins-hbase4:41817] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:21,187 DEBUG [RS:3;jenkins-hbase4:41817] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:21,187 DEBUG 
[RS:3;jenkins-hbase4:41817] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:21,189 DEBUG [RS:3;jenkins-hbase4:41817] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:21,191 DEBUG [RS:3;jenkins-hbase4:41817] zookeeper.ReadOnlyZKClient(139): Connect 0x00af7513 to 127.0.0.1:51057 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:21,203 DEBUG [RS:3;jenkins-hbase4:41817] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e688b76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:21,204 DEBUG [RS:3;jenkins-hbase4:41817] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a571e63, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:21,216 DEBUG [RS:3;jenkins-hbase4:41817] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:41817 2023-07-12 08:18:21,216 INFO [RS:3;jenkins-hbase4:41817] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:21,216 INFO [RS:3;jenkins-hbase4:41817] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:21,217 DEBUG [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 08:18:21,217 INFO [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44301,1689149895428 with isa=jenkins-hbase4.apache.org/172.31.14.131:41817, startcode=1689149901106 2023-07-12 08:18:21,218 DEBUG [RS:3;jenkins-hbase4:41817] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:21,222 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46025, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:21,223 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44301] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:21,223 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 08:18:21,223 DEBUG [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9 2023-07-12 08:18:21,224 DEBUG [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42813 2023-07-12 08:18:21,224 DEBUG [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39471 2023-07-12 08:18:21,230 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:21,231 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:21,230 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:21,231 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:21,231 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:21,232 DEBUG [RS:3;jenkins-hbase4:41817] zookeeper.ZKUtil(162): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:21,232 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 08:18:21,232 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:21,232 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41817,1689149901106] 2023-07-12 08:18:21,232 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:21,232 WARN [RS:3;jenkins-hbase4:41817] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
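The "Restoring servers: 1" entry and everything that follows is the test bringing the cluster back to its expected size: a fourth region server (port 41817) is constructed, registers with the master over RegionServerStatusService, and the rsgroup manager refreshes the default group. From HBaseTestingUtility that step amounts to asking the mini cluster for another region server; a minimal sketch, assuming the same TEST_UTIL as above:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class RestoreServerCount {
  // Assumption: testUtil already owns the running mini cluster shown in this log.
  static void addRegionServer(HBaseTestingUtility testUtil) throws Exception {
    JVMClusterUtil.RegionServerThread extraRs =
        testUtil.getMiniHBaseCluster().startRegionServer();
    // Blocks until the new server has registered with the master (the reportForDuty above).
    extraRs.waitForServerOnline();
  }
}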
2023-07-12 08:18:21,238 INFO [RS:3;jenkins-hbase4:41817] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:21,238 DEBUG [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:21,238 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:21,240 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:21,240 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:21,240 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44301,1689149895428] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 08:18:21,242 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:21,242 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:21,244 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:21,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:21,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:21,247 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:21,248 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:21,250 DEBUG [RS:3;jenkins-hbase4:41817] zookeeper.ZKUtil(162): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:21,250 DEBUG [RS:3;jenkins-hbase4:41817] zookeeper.ZKUtil(162): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:21,251 DEBUG [RS:3;jenkins-hbase4:41817] zookeeper.ZKUtil(162): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:21,252 DEBUG [RS:3;jenkins-hbase4:41817] zookeeper.ZKUtil(162): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:21,253 DEBUG [RS:3;jenkins-hbase4:41817] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:21,254 INFO [RS:3;jenkins-hbase4:41817] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:21,259 INFO [RS:3;jenkins-hbase4:41817] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:21,261 INFO [RS:3;jenkins-hbase4:41817] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 08:18:21,261 INFO [RS:3;jenkins-hbase4:41817] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:21,262 INFO [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:21,264 INFO [RS:3;jenkins-hbase4:41817] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:21,264 DEBUG [RS:3;jenkins-hbase4:41817] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:21,264 DEBUG [RS:3;jenkins-hbase4:41817] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:21,265 DEBUG [RS:3;jenkins-hbase4:41817] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:21,265 DEBUG [RS:3;jenkins-hbase4:41817] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:21,265 DEBUG [RS:3;jenkins-hbase4:41817] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:21,265 DEBUG [RS:3;jenkins-hbase4:41817] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:21,265 DEBUG [RS:3;jenkins-hbase4:41817] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:21,265 DEBUG [RS:3;jenkins-hbase4:41817] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:21,265 DEBUG [RS:3;jenkins-hbase4:41817] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:21,265 DEBUG [RS:3;jenkins-hbase4:41817] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:21,269 INFO [RS:3;jenkins-hbase4:41817] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:21,269 INFO [RS:3;jenkins-hbase4:41817] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:21,269 INFO [RS:3;jenkins-hbase4:41817] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:21,288 INFO [RS:3;jenkins-hbase4:41817] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:21,288 INFO [RS:3;jenkins-hbase4:41817] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41817,1689149901106-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:21,304 INFO [RS:3;jenkins-hbase4:41817] regionserver.Replication(203): jenkins-hbase4.apache.org,41817,1689149901106 started 2023-07-12 08:18:21,304 INFO [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41817,1689149901106, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41817, sessionid=0x101589c725b000b 2023-07-12 08:18:21,304 DEBUG [RS:3;jenkins-hbase4:41817] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:21,304 DEBUG [RS:3;jenkins-hbase4:41817] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:21,304 DEBUG [RS:3;jenkins-hbase4:41817] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41817,1689149901106' 2023-07-12 08:18:21,304 DEBUG [RS:3;jenkins-hbase4:41817] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:21,305 DEBUG [RS:3;jenkins-hbase4:41817] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:21,306 DEBUG [RS:3;jenkins-hbase4:41817] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:21,306 DEBUG [RS:3;jenkins-hbase4:41817] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 08:18:21,306 DEBUG [RS:3;jenkins-hbase4:41817] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:21,306 DEBUG [RS:3;jenkins-hbase4:41817] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41817,1689149901106' 2023-07-12 08:18:21,306 DEBUG [RS:3;jenkins-hbase4:41817] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:21,307 DEBUG [RS:3;jenkins-hbase4:41817] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:21,307 DEBUG [RS:3;jenkins-hbase4:41817] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:21,307 INFO [RS:3;jenkins-hbase4:41817] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 08:18:21,307 INFO [RS:3;jenkins-hbase4:41817] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
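The entries that follow record the per-method setup in TestRSGroupsBase: the client adds a "master" rsgroup and then tries to move the master's address (jenkins-hbase4.apache.org:44301) into it, which RSGroupAdminServer rejects with a ConstraintException because the master is not a registered region server; the test logs the failure ("Got this on setup, FYI") and continues. A minimal Java sketch of that step, assuming an already-open Connection and the RSGroupAdminClient shipped with the hbase-rsgroup module on branch-2.4; class and variable names here are illustrative:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MasterGroupSetupSketch {
      static void addMasterGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("master");  // "add rsgroup master"
        try {
          // "move servers [jenkins-hbase4.apache.org:44301] to rsgroup master"
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 44301)),
              "master");
        } catch (ConstraintException expected) {
          // The master is not registered under /hbase/rs, so the move fails with
          // "Server ... is either offline or it does not exist."; the test tolerates this.
        }
      }
    }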
2023-07-12 08:18:21,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:21,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:21,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:21,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:21,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:21,339 DEBUG [hconnection-0x62be270e-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:21,350 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34006, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:21,375 DEBUG [hconnection-0x62be270e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:21,383 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33532, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:21,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:21,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:21,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:21,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:21,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:58548 deadline: 1689151101407, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:21,411 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:21,414 INFO [RS:3;jenkins-hbase4:41817] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41817%2C1689149901106, suffix=, logDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,41817,1689149901106, archiveDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/oldWALs, maxLogs=32 2023-07-12 08:18:21,429 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:21,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:21,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:21,433 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:21,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:21,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:21,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:21,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:21,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): 
Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:21,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:21,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:21,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:21,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:21,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:21,496 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK] 2023-07-12 08:18:21,507 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK] 2023-07-12 08:18:21,508 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK] 2023-07-12 08:18:21,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:21,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:21,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999] to rsgroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:21,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:21,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:21,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:21,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:21,525 INFO [RS:3;jenkins-hbase4:41817] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,41817,1689149901106/jenkins-hbase4.apache.org%2C41817%2C1689149901106.1689149901417 2023-07-12 08:18:21,526 DEBUG [RS:3;jenkins-hbase4:41817] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK], DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK], DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK]] 2023-07-12 08:18:21,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(238): Moving server region ae71929909c3f585c1f0e7f3408f83d2, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:21,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, REOPEN/MOVE 2023-07-12 08:18:21,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(238): Moving server region e819f13729c8274f2f0efb5a42e75184, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:21,538 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, REOPEN/MOVE 2023-07-12 08:18:21,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, REOPEN/MOVE 2023-07-12 08:18:21,542 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ae71929909c3f585c1f0e7f3408f83d2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:21,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-12 08:18:21,543 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149901542"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149901542"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149901542"}]},"ts":"1689149901542"} 2023-07-12 08:18:21,543 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, REOPEN/MOVE 2023-07-12 08:18:21,545 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e819f13729c8274f2f0efb5a42e75184, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:21,546 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149901545"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149901545"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149901545"}]},"ts":"1689149901545"} 2023-07-12 08:18:21,552 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure ae71929909c3f585c1f0e7f3408f83d2, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:21,555 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure e819f13729c8274f2f0efb5a42e75184, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:21,719 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:21,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ae71929909c3f585c1f0e7f3408f83d2, disabling compactions & flushes 2023-07-12 08:18:21,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:21,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:21,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. after waiting 0 ms 2023-07-12 08:18:21,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 
2023-07-12 08:18:21,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ae71929909c3f585c1f0e7f3408f83d2 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-12 08:18:21,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/.tmp/info/57474fb50e604a99bb4c089b35db0e64 2023-07-12 08:18:21,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/.tmp/info/57474fb50e604a99bb4c089b35db0e64 as hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/info/57474fb50e604a99bb4c089b35db0e64 2023-07-12 08:18:21,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/info/57474fb50e604a99bb4c089b35db0e64, entries=2, sequenceid=6, filesize=4.8 K 2023-07-12 08:18:21,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for ae71929909c3f585c1f0e7f3408f83d2 in 200ms, sequenceid=6, compaction requested=false 2023-07-12 08:18:21,923 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 08:18:21,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-12 08:18:21,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:21,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ae71929909c3f585c1f0e7f3408f83d2: 2023-07-12 08:18:21,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ae71929909c3f585c1f0e7f3408f83d2 move to jenkins-hbase4.apache.org,41817,1689149901106 record at close sequenceid=6 2023-07-12 08:18:21,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:21,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:21,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e819f13729c8274f2f0efb5a42e75184, disabling compactions & flushes 2023-07-12 08:18:21,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 
2023-07-12 08:18:21,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:21,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. after waiting 0 ms 2023-07-12 08:18:21,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:21,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e819f13729c8274f2f0efb5a42e75184 1/1 column families, dataSize=1.38 KB heapSize=2.36 KB 2023-07-12 08:18:21,956 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ae71929909c3f585c1f0e7f3408f83d2, regionState=CLOSED 2023-07-12 08:18:21,956 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149901956"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149901956"}]},"ts":"1689149901956"} 2023-07-12 08:18:21,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-12 08:18:21,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure ae71929909c3f585c1f0e7f3408f83d2, server=jenkins-hbase4.apache.org,38647,1689149897534 in 409 msec 2023-07-12 08:18:21,969 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:21,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/.tmp/m/1f73483a707e46ffb81452ad99f50cbc 2023-07-12 08:18:22,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/.tmp/m/1f73483a707e46ffb81452ad99f50cbc as hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m/1f73483a707e46ffb81452ad99f50cbc 2023-07-12 08:18:22,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m/1f73483a707e46ffb81452ad99f50cbc, entries=3, sequenceid=9, filesize=5.2 K 2023-07-12 08:18:22,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1414, heapSize ~2.34 KB/2400, currentSize=0 B/0 for e819f13729c8274f2f0efb5a42e75184 in 72ms, sequenceid=9, compaction requested=false 
2023-07-12 08:18:22,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 08:18:22,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 08:18:22,045 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 08:18:22,045 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:22,045 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e819f13729c8274f2f0efb5a42e75184: 2023-07-12 08:18:22,046 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e819f13729c8274f2f0efb5a42e75184 move to jenkins-hbase4.apache.org,41817,1689149901106 record at close sequenceid=9 2023-07-12 08:18:22,048 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:22,053 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e819f13729c8274f2f0efb5a42e75184, regionState=CLOSED 2023-07-12 08:18:22,054 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149902053"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149902053"}]},"ts":"1689149902053"} 2023-07-12 08:18:22,060 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-12 08:18:22,060 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure e819f13729c8274f2f0efb5a42e75184, server=jenkins-hbase4.apache.org,38647,1689149897534 in 501 msec 2023-07-12 08:18:22,061 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:22,061 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 08:18:22,062 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ae71929909c3f585c1f0e7f3408f83d2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:22,062 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149902062"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149902062"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149902062"}]},"ts":"1689149902062"} 2023-07-12 08:18:22,063 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e819f13729c8274f2f0efb5a42e75184, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:22,063 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149902063"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149902063"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149902063"}]},"ts":"1689149902063"} 2023-07-12 08:18:22,067 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=12, state=RUNNABLE; OpenRegionProcedure ae71929909c3f585c1f0e7f3408f83d2, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:22,070 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE; OpenRegionProcedure e819f13729c8274f2f0efb5a42e75184, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:22,225 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:22,226 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:22,230 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60842, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:22,236 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 
2023-07-12 08:18:22,236 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ae71929909c3f585c1f0e7f3408f83d2, NAME => 'hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:22,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:22,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:22,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:22,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:22,241 INFO [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:22,243 DEBUG [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/info 2023-07-12 08:18:22,243 DEBUG [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/info 2023-07-12 08:18:22,243 INFO [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ae71929909c3f585c1f0e7f3408f83d2 columnFamilyName info 2023-07-12 08:18:22,260 DEBUG [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] regionserver.HStore(539): loaded hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/info/57474fb50e604a99bb4c089b35db0e64 2023-07-12 08:18:22,261 INFO [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] regionserver.HStore(310): Store=ae71929909c3f585c1f0e7f3408f83d2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:22,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:22,265 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:22,270 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:22,272 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ae71929909c3f585c1f0e7f3408f83d2; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10907197760, jitterRate=0.015811949968338013}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:22,272 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ae71929909c3f585c1f0e7f3408f83d2: 2023-07-12 08:18:22,273 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2., pid=16, masterSystemTime=1689149902225 2023-07-12 08:18:22,277 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:22,277 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:22,278 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:22,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e819f13729c8274f2f0efb5a42e75184, NAME => 'hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:22,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 08:18:22,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. service=MultiRowMutationService 2023-07-12 08:18:22,278 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 08:18:22,279 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:22,279 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ae71929909c3f585c1f0e7f3408f83d2, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:22,279 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:22,279 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:22,279 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:22,279 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149902278"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149902278"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149902278"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149902278"}]},"ts":"1689149902278"} 2023-07-12 08:18:22,288 INFO [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:22,290 DEBUG [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m 2023-07-12 08:18:22,291 DEBUG [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m 2023-07-12 08:18:22,291 INFO [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e819f13729c8274f2f0efb5a42e75184 columnFamilyName m 2023-07-12 08:18:22,291 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-07-12 08:18:22,293 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished 
pid=16, ppid=12, state=SUCCESS; OpenRegionProcedure ae71929909c3f585c1f0e7f3408f83d2, server=jenkins-hbase4.apache.org,41817,1689149901106 in 221 msec 2023-07-12 08:18:22,302 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, REOPEN/MOVE in 758 msec 2023-07-12 08:18:22,309 DEBUG [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] regionserver.HStore(539): loaded hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m/1f73483a707e46ffb81452ad99f50cbc 2023-07-12 08:18:22,309 INFO [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] regionserver.HStore(310): Store=e819f13729c8274f2f0efb5a42e75184/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:22,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:22,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:22,317 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:22,319 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e819f13729c8274f2f0efb5a42e75184; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@9207ff, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:22,319 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e819f13729c8274f2f0efb5a42e75184: 2023-07-12 08:18:22,320 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184., pid=17, masterSystemTime=1689149902225 2023-07-12 08:18:22,323 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:22,323 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 
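The CompactionConfiguration entries logged while these stores open (minCompactSize:128 MB, files [3,10), ratio 1.2, off-peak ratio 5.0, major period 604800000 ms, jitter 0.5) are the values resolved from the standard compaction properties. A hedged Java sketch of the corresponding keys, assuming the usual names from hbase-default.xml; verify them for your version before relying on this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      static Configuration compactionDefaults() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize:128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact:10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio 1.200000
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio 5.000000
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter 0.500000
        return conf;
      }
    }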
2023-07-12 08:18:22,324 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e819f13729c8274f2f0efb5a42e75184, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:22,324 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149902324"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149902324"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149902324"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149902324"}]},"ts":"1689149902324"} 2023-07-12 08:18:22,331 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=13 2023-07-12 08:18:22,331 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=13, state=SUCCESS; OpenRegionProcedure e819f13729c8274f2f0efb5a42e75184, server=jenkins-hbase4.apache.org,41817,1689149901106 in 257 msec 2023-07-12 08:18:22,333 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, REOPEN/MOVE in 794 msec 2023-07-12 08:18:22,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-12 08:18:22,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362, jenkins-hbase4.apache.org,38647,1689149897534] are moved back to default 2023-07-12 08:18:22,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:22,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:22,549 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38647] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:33532 deadline: 1689149962548, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41817 startCode=1689149901106. As of locationSeqNum=9. 
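The stretch from "add rsgroup Group_testTableMoveTruncateAndDrop_169286504" through "Move servers done: default => Group_testTableMoveTruncateAndDrop_169286504" is the normal group-provisioning path: create the group, move two region servers into it (the master first evacuates the system regions they host, as the TransitRegionStateProcedures above show, onto a server still in the default group), then read the group back. A compact Java sketch of the same client calls, assuming an open Connection; variable names are illustrative:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class TestGroupProvisionSketch {
      static void provisionGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        String group = "Group_testTableMoveTruncateAndDrop_169286504";
        rsGroupAdmin.addRSGroup(group);                 // "add rsgroup Group_..."
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38647));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 36999));
        // Regions hosted on these servers (hbase:namespace, hbase:rsgroup in this log) are
        // reassigned to a server remaining in the default group before the move completes.
        rsGroupAdmin.moveServers(servers, group);       // "move servers [...] to rsgroup Group_..."
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
        assert info.getServers().containsAll(servers);  // mirrors the GetRSGroupInfo calls above
      }
    }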
2023-07-12 08:18:22,655 DEBUG [hconnection-0x62be270e-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:22,666 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60850, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:22,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:22,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:22,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:22,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:22,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:22,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:22,707 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:22,710 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38647] ipc.CallRunner(144): callId: 47 service: ClientService methodName: ExecService size: 619 connection: 172.31.14.131:33522 deadline: 1689149962710, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41817 startCode=1689149901106. As of locationSeqNum=9. 
2023-07-12 08:18:22,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-12 08:18:22,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-12 08:18:22,814 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:22,818 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60866, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:22,822 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:22,822 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:22,823 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:22,824 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:22,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-12 08:18:22,831 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:22,837 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:22,837 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:22,837 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:22,837 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:22,838 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:22,838 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb empty. 2023-07-12 08:18:22,838 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8 empty. 
2023-07-12 08:18:22,838 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9 empty. 2023-07-12 08:18:22,839 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f empty. 2023-07-12 08:18:22,839 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f empty. 2023-07-12 08:18:22,839 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:22,839 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:22,839 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:22,840 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:22,840 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:22,840 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 08:18:22,890 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:22,891 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => eff2f3cd498b2012f65d5fe1e65052d9, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:22,892 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 027e08cc9f8687683905525a252fa5bb, NAME => 
'Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:22,892 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2cf907e22251436df677e8fd6ee97af8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:22,945 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:22,946 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing eff2f3cd498b2012f65d5fe1e65052d9, disabling compactions & flushes 2023-07-12 08:18:22,946 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:22,946 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:22,946 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. after waiting 0 ms 2023-07-12 08:18:22,946 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:22,946 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 
2023-07-12 08:18:22,946 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for eff2f3cd498b2012f65d5fe1e65052d9: 2023-07-12 08:18:22,947 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => cb2badad8695bfb67b04fbaa48dd2d4f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:22,950 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:22,955 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 027e08cc9f8687683905525a252fa5bb, disabling compactions & flushes 2023-07-12 08:18:22,955 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:22,955 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:22,955 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:22,956 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 2cf907e22251436df677e8fd6ee97af8, disabling compactions & flushes 2023-07-12 08:18:22,956 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. after waiting 0 ms 2023-07-12 08:18:22,956 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:22,956 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:22,956 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 
2023-07-12 08:18:22,956 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:22,956 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. after waiting 0 ms 2023-07-12 08:18:22,956 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:22,956 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:22,957 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 2cf907e22251436df677e8fd6ee97af8: 2023-07-12 08:18:22,957 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 74773293fc1208506696c5f04ee5bb8f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:22,956 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 027e08cc9f8687683905525a252fa5bb: 2023-07-12 08:18:22,983 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:22,984 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 74773293fc1208506696c5f04ee5bb8f, disabling compactions & flushes 2023-07-12 08:18:22,984 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:22,984 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 
2023-07-12 08:18:22,984 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:22,984 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. after waiting 0 ms 2023-07-12 08:18:22,984 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing cb2badad8695bfb67b04fbaa48dd2d4f, disabling compactions & flushes 2023-07-12 08:18:22,984 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:22,984 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:22,985 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:22,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:22,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 74773293fc1208506696c5f04ee5bb8f: 2023-07-12 08:18:22,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. after waiting 0 ms 2023-07-12 08:18:22,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:22,985 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 
2023-07-12 08:18:22,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for cb2badad8695bfb67b04fbaa48dd2d4f: 2023-07-12 08:18:22,989 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:22,990 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149902990"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149902990"}]},"ts":"1689149902990"} 2023-07-12 08:18:22,990 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149902990"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149902990"}]},"ts":"1689149902990"} 2023-07-12 08:18:22,991 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149902990"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149902990"}]},"ts":"1689149902990"} 2023-07-12 08:18:22,991 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149902990"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149902990"}]},"ts":"1689149902990"} 2023-07-12 08:18:22,991 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149902990"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149902990"}]},"ts":"1689149902990"} 2023-07-12 08:18:23,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-12 08:18:23,051 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 08:18:23,052 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:23,052 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149903052"}]},"ts":"1689149903052"} 2023-07-12 08:18:23,054 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 08:18:23,063 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:23,064 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:23,064 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:23,064 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:23,065 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, ASSIGN}] 2023-07-12 08:18:23,068 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, ASSIGN 2023-07-12 08:18:23,068 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, ASSIGN 2023-07-12 08:18:23,069 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, ASSIGN 2023-07-12 08:18:23,069 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, ASSIGN 2023-07-12 08:18:23,070 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, ASSIGN 2023-07-12 08:18:23,070 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:23,070 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:23,070 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:23,071 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:23,072 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:23,221 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 08:18:23,225 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=74773293fc1208506696c5f04ee5bb8f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:23,225 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=eff2f3cd498b2012f65d5fe1e65052d9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:23,225 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149903225"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149903225"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149903225"}]},"ts":"1689149903225"} 2023-07-12 08:18:23,226 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149903225"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149903225"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149903225"}]},"ts":"1689149903225"} 2023-07-12 08:18:23,226 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=027e08cc9f8687683905525a252fa5bb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:23,226 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149903226"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149903226"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149903226"}]},"ts":"1689149903226"} 2023-07-12 08:18:23,225 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=2cf907e22251436df677e8fd6ee97af8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:23,225 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=cb2badad8695bfb67b04fbaa48dd2d4f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:23,227 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149903225"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149903225"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149903225"}]},"ts":"1689149903225"} 2023-07-12 08:18:23,233 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149903225"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149903225"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149903225"}]},"ts":"1689149903225"} 2023-07-12 08:18:23,234 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=23, state=RUNNABLE; OpenRegionProcedure 
74773293fc1208506696c5f04ee5bb8f, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:23,236 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=21, state=RUNNABLE; OpenRegionProcedure eff2f3cd498b2012f65d5fe1e65052d9, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:23,239 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=19, state=RUNNABLE; OpenRegionProcedure 027e08cc9f8687683905525a252fa5bb, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:23,241 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=22, state=RUNNABLE; OpenRegionProcedure cb2badad8695bfb67b04fbaa48dd2d4f, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:23,244 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=20, state=RUNNABLE; OpenRegionProcedure 2cf907e22251436df677e8fd6ee97af8, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:23,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-12 08:18:23,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:23,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 027e08cc9f8687683905525a252fa5bb, NAME => 'Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 08:18:23,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:23,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:23,397 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 
2023-07-12 08:18:23,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:23,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:23,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2cf907e22251436df677e8fd6ee97af8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 08:18:23,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:23,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:23,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:23,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:23,401 INFO [StoreOpener-027e08cc9f8687683905525a252fa5bb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:23,401 INFO [StoreOpener-2cf907e22251436df677e8fd6ee97af8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:23,403 DEBUG [StoreOpener-027e08cc9f8687683905525a252fa5bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/f 2023-07-12 08:18:23,404 DEBUG [StoreOpener-027e08cc9f8687683905525a252fa5bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/f 2023-07-12 08:18:23,404 DEBUG [StoreOpener-2cf907e22251436df677e8fd6ee97af8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/f 2023-07-12 08:18:23,404 DEBUG [StoreOpener-2cf907e22251436df677e8fd6ee97af8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/f 2023-07-12 08:18:23,404 INFO [StoreOpener-027e08cc9f8687683905525a252fa5bb-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 027e08cc9f8687683905525a252fa5bb columnFamilyName f 2023-07-12 08:18:23,404 INFO [StoreOpener-2cf907e22251436df677e8fd6ee97af8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2cf907e22251436df677e8fd6ee97af8 columnFamilyName f 2023-07-12 08:18:23,405 INFO [StoreOpener-027e08cc9f8687683905525a252fa5bb-1] regionserver.HStore(310): Store=027e08cc9f8687683905525a252fa5bb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:23,406 INFO [StoreOpener-2cf907e22251436df677e8fd6ee97af8-1] regionserver.HStore(310): Store=2cf907e22251436df677e8fd6ee97af8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:23,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:23,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:23,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:23,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:23,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:23,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:23,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:23,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 027e08cc9f8687683905525a252fa5bb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9891052800, jitterRate=-0.07882392406463623}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:23,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 027e08cc9f8687683905525a252fa5bb: 2023-07-12 08:18:23,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:23,422 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb., pid=26, masterSystemTime=1689149903390 2023-07-12 08:18:23,423 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2cf907e22251436df677e8fd6ee97af8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11971306560, jitterRate=0.11491480469703674}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:23,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2cf907e22251436df677e8fd6ee97af8: 2023-07-12 08:18:23,424 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8., pid=28, masterSystemTime=1689149903390 2023-07-12 08:18:23,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:23,426 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:23,426 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 
2023-07-12 08:18:23,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cb2badad8695bfb67b04fbaa48dd2d4f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 08:18:23,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:23,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:23,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:23,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:23,427 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=027e08cc9f8687683905525a252fa5bb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:23,427 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149903427"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149903427"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149903427"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149903427"}]},"ts":"1689149903427"} 2023-07-12 08:18:23,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:23,429 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=2cf907e22251436df677e8fd6ee97af8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:23,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:23,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 
2023-07-12 08:18:23,435 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149903429"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149903429"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149903429"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149903429"}]},"ts":"1689149903429"} 2023-07-12 08:18:23,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 74773293fc1208506696c5f04ee5bb8f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 08:18:23,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:23,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:23,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:23,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:23,436 INFO [StoreOpener-cb2badad8695bfb67b04fbaa48dd2d4f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:23,441 DEBUG [StoreOpener-cb2badad8695bfb67b04fbaa48dd2d4f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/f 2023-07-12 08:18:23,441 DEBUG [StoreOpener-cb2badad8695bfb67b04fbaa48dd2d4f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/f 2023-07-12 08:18:23,441 INFO [StoreOpener-74773293fc1208506696c5f04ee5bb8f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:23,442 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=19 2023-07-12 08:18:23,442 INFO [StoreOpener-cb2badad8695bfb67b04fbaa48dd2d4f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, 
major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cb2badad8695bfb67b04fbaa48dd2d4f columnFamilyName f 2023-07-12 08:18:23,442 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=19, state=SUCCESS; OpenRegionProcedure 027e08cc9f8687683905525a252fa5bb, server=jenkins-hbase4.apache.org,41817,1689149901106 in 199 msec 2023-07-12 08:18:23,444 INFO [StoreOpener-cb2badad8695bfb67b04fbaa48dd2d4f-1] regionserver.HStore(310): Store=cb2badad8695bfb67b04fbaa48dd2d4f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:23,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=20 2023-07-12 08:18:23,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=20, state=SUCCESS; OpenRegionProcedure 2cf907e22251436df677e8fd6ee97af8, server=jenkins-hbase4.apache.org,42347,1689149897465 in 196 msec 2023-07-12 08:18:23,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:23,446 DEBUG [StoreOpener-74773293fc1208506696c5f04ee5bb8f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/f 2023-07-12 08:18:23,446 DEBUG [StoreOpener-74773293fc1208506696c5f04ee5bb8f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/f 2023-07-12 08:18:23,447 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, ASSIGN in 378 msec 2023-07-12 08:18:23,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:23,447 INFO [StoreOpener-74773293fc1208506696c5f04ee5bb8f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
74773293fc1208506696c5f04ee5bb8f columnFamilyName f 2023-07-12 08:18:23,448 INFO [StoreOpener-74773293fc1208506696c5f04ee5bb8f-1] regionserver.HStore(310): Store=74773293fc1208506696c5f04ee5bb8f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:23,449 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, ASSIGN in 381 msec 2023-07-12 08:18:23,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:23,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:23,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:23,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:23,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:23,464 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 74773293fc1208506696c5f04ee5bb8f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12041574240, jitterRate=0.1214589923620224}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:23,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:23,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 74773293fc1208506696c5f04ee5bb8f: 2023-07-12 08:18:23,465 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cb2badad8695bfb67b04fbaa48dd2d4f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9603835680, jitterRate=-0.10557310283184052}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:23,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cb2badad8695bfb67b04fbaa48dd2d4f: 2023-07-12 08:18:23,465 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f., pid=24, 
masterSystemTime=1689149903390 2023-07-12 08:18:23,467 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f., pid=27, masterSystemTime=1689149903390 2023-07-12 08:18:23,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:23,468 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:23,469 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=74773293fc1208506696c5f04ee5bb8f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:23,469 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149903469"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149903469"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149903469"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149903469"}]},"ts":"1689149903469"} 2023-07-12 08:18:23,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:23,469 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:23,469 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 
2023-07-12 08:18:23,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eff2f3cd498b2012f65d5fe1e65052d9, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 08:18:23,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:23,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:23,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:23,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:23,471 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=cb2badad8695bfb67b04fbaa48dd2d4f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:23,472 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149903471"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149903471"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149903471"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149903471"}]},"ts":"1689149903471"} 2023-07-12 08:18:23,475 INFO [StoreOpener-eff2f3cd498b2012f65d5fe1e65052d9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:23,477 DEBUG [StoreOpener-eff2f3cd498b2012f65d5fe1e65052d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/f 2023-07-12 08:18:23,477 DEBUG [StoreOpener-eff2f3cd498b2012f65d5fe1e65052d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/f 2023-07-12 08:18:23,479 INFO [StoreOpener-eff2f3cd498b2012f65d5fe1e65052d9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eff2f3cd498b2012f65d5fe1e65052d9 columnFamilyName f 2023-07-12 08:18:23,480 INFO [StoreOpener-eff2f3cd498b2012f65d5fe1e65052d9-1] regionserver.HStore(310): Store=eff2f3cd498b2012f65d5fe1e65052d9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:23,480 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=23 2023-07-12 08:18:23,481 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=23, state=SUCCESS; OpenRegionProcedure 74773293fc1208506696c5f04ee5bb8f, server=jenkins-hbase4.apache.org,42347,1689149897465 in 240 msec 2023-07-12 08:18:23,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:23,483 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=22 2023-07-12 08:18:23,483 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=22, state=SUCCESS; OpenRegionProcedure cb2badad8695bfb67b04fbaa48dd2d4f, server=jenkins-hbase4.apache.org,41817,1689149901106 in 236 msec 2023-07-12 08:18:23,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:23,484 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, ASSIGN in 417 msec 2023-07-12 08:18:23,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:23,488 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, ASSIGN in 419 msec 2023-07-12 08:18:23,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:23,493 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eff2f3cd498b2012f65d5fe1e65052d9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10279801440, jitterRate=-0.04261888563632965}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:23,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eff2f3cd498b2012f65d5fe1e65052d9: 2023-07-12 08:18:23,494 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9., pid=25, masterSystemTime=1689149903390 2023-07-12 08:18:23,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:23,496 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:23,497 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=eff2f3cd498b2012f65d5fe1e65052d9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:23,497 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149903497"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149903497"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149903497"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149903497"}]},"ts":"1689149903497"} 2023-07-12 08:18:23,503 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=21 2023-07-12 08:18:23,503 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; OpenRegionProcedure eff2f3cd498b2012f65d5fe1e65052d9, server=jenkins-hbase4.apache.org,41817,1689149901106 in 264 msec 2023-07-12 08:18:23,512 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=18 2023-07-12 08:18:23,512 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, ASSIGN in 439 msec 2023-07-12 08:18:23,514 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:23,515 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149903515"}]},"ts":"1689149903515"} 2023-07-12 08:18:23,517 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 08:18:23,523 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:23,531 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 821 msec 2023-07-12 08:18:23,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-12 08:18:23,833 INFO [Listener at localhost/44853] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-12 08:18:23,833 DEBUG [Listener at localhost/44853] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-12 08:18:23,834 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:23,841 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-12 08:18:23,842 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:23,842 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-12 08:18:23,843 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:23,848 DEBUG [Listener at localhost/44853] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:23,852 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48628, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:23,855 DEBUG [Listener at localhost/44853] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:23,858 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33542, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:23,859 DEBUG [Listener at localhost/44853] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:23,861 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60868, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:23,863 DEBUG [Listener at localhost/44853] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:23,865 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34022, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:23,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:23,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:23,876 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:23,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:23,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:23,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:23,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:23,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:23,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:23,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region 027e08cc9f8687683905525a252fa5bb to RSGroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:23,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:23,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:23,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:23,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:23,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:23,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, REOPEN/MOVE 2023-07-12 08:18:23,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region 2cf907e22251436df677e8fd6ee97af8 to RSGroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:23,898 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, REOPEN/MOVE 2023-07-12 08:18:23,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:23,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:23,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:23,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:23,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 
08:18:23,899 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=027e08cc9f8687683905525a252fa5bb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:23,900 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149903899"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149903899"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149903899"}]},"ts":"1689149903899"} 2023-07-12 08:18:23,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, REOPEN/MOVE 2023-07-12 08:18:23,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region eff2f3cd498b2012f65d5fe1e65052d9 to RSGroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:23,901 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, REOPEN/MOVE 2023-07-12 08:18:23,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:23,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:23,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:23,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:23,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:23,902 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=2cf907e22251436df677e8fd6ee97af8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:23,903 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149903902"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149903902"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149903902"}]},"ts":"1689149903902"} 2023-07-12 08:18:23,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, REOPEN/MOVE 2023-07-12 08:18:23,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region cb2badad8695bfb67b04fbaa48dd2d4f to RSGroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:23,903 INFO [PEWorker-3] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, REOPEN/MOVE 2023-07-12 08:18:23,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:23,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:23,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:23,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:23,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:23,905 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=eff2f3cd498b2012f65d5fe1e65052d9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:23,905 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149903905"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149903905"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149903905"}]},"ts":"1689149903905"} 2023-07-12 08:18:23,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, REOPEN/MOVE 2023-07-12 08:18:23,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region 74773293fc1208506696c5f04ee5bb8f to RSGroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:23,906 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, REOPEN/MOVE 2023-07-12 08:18:23,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:23,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:23,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:23,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:23,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:23,908 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=29, state=RUNNABLE; 
CloseRegionProcedure 027e08cc9f8687683905525a252fa5bb, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:23,908 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=cb2badad8695bfb67b04fbaa48dd2d4f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:23,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, REOPEN/MOVE 2023-07-12 08:18:23,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_169286504, current retry=0 2023-07-12 08:18:23,908 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149903908"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149903908"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149903908"}]},"ts":"1689149903908"} 2023-07-12 08:18:23,909 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, REOPEN/MOVE 2023-07-12 08:18:23,911 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=74773293fc1208506696c5f04ee5bb8f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:23,911 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149903911"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149903911"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149903911"}]},"ts":"1689149903911"} 2023-07-12 08:18:23,912 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=30, state=RUNNABLE; CloseRegionProcedure 2cf907e22251436df677e8fd6ee97af8, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:23,912 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=31, state=RUNNABLE; CloseRegionProcedure eff2f3cd498b2012f65d5fe1e65052d9, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:23,915 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=32, state=RUNNABLE; CloseRegionProcedure cb2badad8695bfb67b04fbaa48dd2d4f, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:23,917 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=33, state=RUNNABLE; CloseRegionProcedure 74773293fc1208506696c5f04ee5bb8f, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:24,064 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:24,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 
eff2f3cd498b2012f65d5fe1e65052d9, disabling compactions & flushes 2023-07-12 08:18:24,066 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:24,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:24,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. after waiting 0 ms 2023-07-12 08:18:24,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:24,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:24,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2cf907e22251436df677e8fd6ee97af8, disabling compactions & flushes 2023-07-12 08:18:24,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:24,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:24,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. after waiting 0 ms 2023-07-12 08:18:24,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:24,075 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:24,077 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 
2023-07-12 08:18:24,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eff2f3cd498b2012f65d5fe1e65052d9: 2023-07-12 08:18:24,077 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding eff2f3cd498b2012f65d5fe1e65052d9 move to jenkins-hbase4.apache.org,38647,1689149897534 record at close sequenceid=2 2023-07-12 08:18:24,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:24,085 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:24,085 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=eff2f3cd498b2012f65d5fe1e65052d9, regionState=CLOSED 2023-07-12 08:18:24,085 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904085"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149904085"}]},"ts":"1689149904085"} 2023-07-12 08:18:24,091 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 027e08cc9f8687683905525a252fa5bb, disabling compactions & flushes 2023-07-12 08:18:24,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:24,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:24,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. after waiting 0 ms 2023-07-12 08:18:24,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:24,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:24,097 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=31 2023-07-12 08:18:24,097 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=31, state=SUCCESS; CloseRegionProcedure eff2f3cd498b2012f65d5fe1e65052d9, server=jenkins-hbase4.apache.org,41817,1689149901106 in 176 msec 2023-07-12 08:18:24,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 
2023-07-12 08:18:24,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2cf907e22251436df677e8fd6ee97af8: 2023-07-12 08:18:24,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2cf907e22251436df677e8fd6ee97af8 move to jenkins-hbase4.apache.org,38647,1689149897534 record at close sequenceid=2 2023-07-12 08:18:24,098 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38647,1689149897534; forceNewPlan=false, retain=false 2023-07-12 08:18:24,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:24,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:24,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 74773293fc1208506696c5f04ee5bb8f, disabling compactions & flushes 2023-07-12 08:18:24,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:24,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:24,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. after waiting 0 ms 2023-07-12 08:18:24,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:24,105 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=2cf907e22251436df677e8fd6ee97af8, regionState=CLOSED 2023-07-12 08:18:24,105 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904105"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149904105"}]},"ts":"1689149904105"} 2023-07-12 08:18:24,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:24,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 
2023-07-12 08:18:24,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 027e08cc9f8687683905525a252fa5bb: 2023-07-12 08:18:24,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 027e08cc9f8687683905525a252fa5bb move to jenkins-hbase4.apache.org,38647,1689149897534 record at close sequenceid=2 2023-07-12 08:18:24,118 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=30 2023-07-12 08:18:24,119 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=30, state=SUCCESS; CloseRegionProcedure 2cf907e22251436df677e8fd6ee97af8, server=jenkins-hbase4.apache.org,42347,1689149897465 in 196 msec 2023-07-12 08:18:24,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:24,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:24,121 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38647,1689149897534; forceNewPlan=false, retain=false 2023-07-12 08:18:24,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cb2badad8695bfb67b04fbaa48dd2d4f, disabling compactions & flushes 2023-07-12 08:18:24,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:24,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:24,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. after waiting 0 ms 2023-07-12 08:18:24,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 
2023-07-12 08:18:24,130 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=027e08cc9f8687683905525a252fa5bb, regionState=CLOSED 2023-07-12 08:18:24,130 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149904130"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149904130"}]},"ts":"1689149904130"} 2023-07-12 08:18:24,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:24,143 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=29 2023-07-12 08:18:24,143 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=29, state=SUCCESS; CloseRegionProcedure 027e08cc9f8687683905525a252fa5bb, server=jenkins-hbase4.apache.org,41817,1689149901106 in 224 msec 2023-07-12 08:18:24,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:24,144 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 74773293fc1208506696c5f04ee5bb8f: 2023-07-12 08:18:24,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 74773293fc1208506696c5f04ee5bb8f move to jenkins-hbase4.apache.org,38647,1689149897534 record at close sequenceid=2 2023-07-12 08:18:24,145 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38647,1689149897534; forceNewPlan=false, retain=false 2023-07-12 08:18:24,149 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:24,150 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:24,150 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=74773293fc1208506696c5f04ee5bb8f, regionState=CLOSED 2023-07-12 08:18:24,150 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149904150"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149904150"}]},"ts":"1689149904150"} 2023-07-12 08:18:24,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 
2023-07-12 08:18:24,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cb2badad8695bfb67b04fbaa48dd2d4f: 2023-07-12 08:18:24,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cb2badad8695bfb67b04fbaa48dd2d4f move to jenkins-hbase4.apache.org,36999,1689149897362 record at close sequenceid=2 2023-07-12 08:18:24,154 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:24,155 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=cb2badad8695bfb67b04fbaa48dd2d4f, regionState=CLOSED 2023-07-12 08:18:24,155 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904155"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149904155"}]},"ts":"1689149904155"} 2023-07-12 08:18:24,159 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=33 2023-07-12 08:18:24,159 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=33, state=SUCCESS; CloseRegionProcedure 74773293fc1208506696c5f04ee5bb8f, server=jenkins-hbase4.apache.org,42347,1689149897465 in 237 msec 2023-07-12 08:18:24,161 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38647,1689149897534; forceNewPlan=false, retain=false 2023-07-12 08:18:24,172 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=32 2023-07-12 08:18:24,173 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=32, state=SUCCESS; CloseRegionProcedure cb2badad8695bfb67b04fbaa48dd2d4f, server=jenkins-hbase4.apache.org,41817,1689149901106 in 243 msec 2023-07-12 08:18:24,174 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36999,1689149897362; forceNewPlan=false, retain=false 2023-07-12 08:18:24,249 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 08:18:24,249 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=74773293fc1208506696c5f04ee5bb8f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,249 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149904249"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149904249"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149904249"}]},"ts":"1689149904249"} 2023-07-12 08:18:24,250 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=eff2f3cd498b2012f65d5fe1e65052d9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,250 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=2cf907e22251436df677e8fd6ee97af8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,250 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904250"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149904250"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149904250"}]},"ts":"1689149904250"} 2023-07-12 08:18:24,250 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904250"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149904250"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149904250"}]},"ts":"1689149904250"} 2023-07-12 08:18:24,251 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=027e08cc9f8687683905525a252fa5bb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,251 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149904251"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149904251"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149904251"}]},"ts":"1689149904251"} 2023-07-12 08:18:24,255 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=cb2badad8695bfb67b04fbaa48dd2d4f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:24,255 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904255"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149904255"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149904255"}]},"ts":"1689149904255"} 2023-07-12 08:18:24,256 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=33, state=RUNNABLE; OpenRegionProcedure 
74773293fc1208506696c5f04ee5bb8f, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:24,259 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=31, state=RUNNABLE; OpenRegionProcedure eff2f3cd498b2012f65d5fe1e65052d9, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:24,260 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=29, state=RUNNABLE; OpenRegionProcedure 027e08cc9f8687683905525a252fa5bb, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:24,262 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=30, state=RUNNABLE; OpenRegionProcedure 2cf907e22251436df677e8fd6ee97af8, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:24,263 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=32, state=RUNNABLE; OpenRegionProcedure cb2badad8695bfb67b04fbaa48dd2d4f, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:24,416 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:24,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 74773293fc1208506696c5f04ee5bb8f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 08:18:24,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:24,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:24,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:24,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:24,419 INFO [StoreOpener-74773293fc1208506696c5f04ee5bb8f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:24,420 DEBUG [StoreOpener-74773293fc1208506696c5f04ee5bb8f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/f 2023-07-12 08:18:24,421 DEBUG [StoreOpener-74773293fc1208506696c5f04ee5bb8f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/f 2023-07-12 08:18:24,421 INFO [StoreOpener-74773293fc1208506696c5f04ee5bb8f-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 74773293fc1208506696c5f04ee5bb8f columnFamilyName f 2023-07-12 08:18:24,422 INFO [StoreOpener-74773293fc1208506696c5f04ee5bb8f-1] regionserver.HStore(310): Store=74773293fc1208506696c5f04ee5bb8f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:24,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:24,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:24,427 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:24,427 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:24,431 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48642, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:24,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:24,436 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 
2023-07-12 08:18:24,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cb2badad8695bfb67b04fbaa48dd2d4f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 08:18:24,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:24,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:24,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:24,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:24,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 74773293fc1208506696c5f04ee5bb8f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9642220000, jitterRate=-0.10199828445911407}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:24,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 74773293fc1208506696c5f04ee5bb8f: 2023-07-12 08:18:24,440 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f., pid=39, masterSystemTime=1689149904411 2023-07-12 08:18:24,442 INFO [StoreOpener-cb2badad8695bfb67b04fbaa48dd2d4f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:24,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:24,442 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:24,443 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=74773293fc1208506696c5f04ee5bb8f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 
2023-07-12 08:18:24,443 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149904442"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149904442"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149904442"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149904442"}]},"ts":"1689149904442"} 2023-07-12 08:18:24,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2cf907e22251436df677e8fd6ee97af8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 08:18:24,443 DEBUG [StoreOpener-cb2badad8695bfb67b04fbaa48dd2d4f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/f 2023-07-12 08:18:24,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:24,444 DEBUG [StoreOpener-cb2badad8695bfb67b04fbaa48dd2d4f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/f 2023-07-12 08:18:24,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:24,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:24,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:24,445 INFO [StoreOpener-cb2badad8695bfb67b04fbaa48dd2d4f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cb2badad8695bfb67b04fbaa48dd2d4f columnFamilyName f 2023-07-12 08:18:24,445 INFO [StoreOpener-cb2badad8695bfb67b04fbaa48dd2d4f-1] regionserver.HStore(310): Store=cb2badad8695bfb67b04fbaa48dd2d4f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:24,449 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume 
processing ppid=33 2023-07-12 08:18:24,451 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, REOPEN/MOVE in 542 msec 2023-07-12 08:18:24,455 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=33, state=SUCCESS; OpenRegionProcedure 74773293fc1208506696c5f04ee5bb8f, server=jenkins-hbase4.apache.org,38647,1689149897534 in 190 msec 2023-07-12 08:18:24,455 INFO [StoreOpener-2cf907e22251436df677e8fd6ee97af8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:24,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:24,457 DEBUG [StoreOpener-2cf907e22251436df677e8fd6ee97af8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/f 2023-07-12 08:18:24,457 DEBUG [StoreOpener-2cf907e22251436df677e8fd6ee97af8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/f 2023-07-12 08:18:24,457 INFO [StoreOpener-2cf907e22251436df677e8fd6ee97af8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2cf907e22251436df677e8fd6ee97af8 columnFamilyName f 2023-07-12 08:18:24,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:24,459 INFO [StoreOpener-2cf907e22251436df677e8fd6ee97af8-1] regionserver.HStore(310): Store=2cf907e22251436df677e8fd6ee97af8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:24,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:24,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:24,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:24,471 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2cf907e22251436df677e8fd6ee97af8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10244284960, jitterRate=-0.045926615595817566}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:24,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2cf907e22251436df677e8fd6ee97af8: 2023-07-12 08:18:24,472 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8., pid=42, masterSystemTime=1689149904411 2023-07-12 08:18:24,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:24,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:24,475 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:24,476 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 
2023-07-12 08:18:24,476 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=2cf907e22251436df677e8fd6ee97af8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 027e08cc9f8687683905525a252fa5bb, NAME => 'Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 08:18:24,476 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904476"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149904476"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149904476"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149904476"}]},"ts":"1689149904476"} 2023-07-12 08:18:24,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:24,476 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cb2badad8695bfb67b04fbaa48dd2d4f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11984432960, jitterRate=0.11613729596138}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:24,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cb2badad8695bfb67b04fbaa48dd2d4f: 2023-07-12 08:18:24,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:24,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:24,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:24,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f., pid=43, masterSystemTime=1689149904427 2023-07-12 08:18:24,489 INFO [StoreOpener-027e08cc9f8687683905525a252fa5bb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:24,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 
2023-07-12 08:18:24,491 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:24,492 DEBUG [StoreOpener-027e08cc9f8687683905525a252fa5bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/f 2023-07-12 08:18:24,492 DEBUG [StoreOpener-027e08cc9f8687683905525a252fa5bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/f 2023-07-12 08:18:24,492 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=cb2badad8695bfb67b04fbaa48dd2d4f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:24,492 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904492"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149904492"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149904492"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149904492"}]},"ts":"1689149904492"} 2023-07-12 08:18:24,492 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=30 2023-07-12 08:18:24,492 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=30, state=SUCCESS; OpenRegionProcedure 2cf907e22251436df677e8fd6ee97af8, server=jenkins-hbase4.apache.org,38647,1689149897534 in 226 msec 2023-07-12 08:18:24,492 INFO [StoreOpener-027e08cc9f8687683905525a252fa5bb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 027e08cc9f8687683905525a252fa5bb columnFamilyName f 2023-07-12 08:18:24,494 INFO [StoreOpener-027e08cc9f8687683905525a252fa5bb-1] regionserver.HStore(310): Store=027e08cc9f8687683905525a252fa5bb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:24,495 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, REOPEN/MOVE in 594 msec 2023-07-12 08:18:24,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:24,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:24,502 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=32 2023-07-12 08:18:24,502 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=32, state=SUCCESS; OpenRegionProcedure cb2badad8695bfb67b04fbaa48dd2d4f, server=jenkins-hbase4.apache.org,36999,1689149897362 in 232 msec 2023-07-12 08:18:24,509 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, REOPEN/MOVE in 598 msec 2023-07-12 08:18:24,511 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:24,512 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 027e08cc9f8687683905525a252fa5bb; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11451522720, jitterRate=0.06650616228580475}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:24,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 027e08cc9f8687683905525a252fa5bb: 2023-07-12 08:18:24,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb., pid=41, masterSystemTime=1689149904411 2023-07-12 08:18:24,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:24,517 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:24,517 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 
2023-07-12 08:18:24,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eff2f3cd498b2012f65d5fe1e65052d9, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 08:18:24,517 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=027e08cc9f8687683905525a252fa5bb, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:24,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:24,518 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149904517"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149904517"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149904517"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149904517"}]},"ts":"1689149904517"} 2023-07-12 08:18:24,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:24,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:24,521 INFO [StoreOpener-eff2f3cd498b2012f65d5fe1e65052d9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:24,522 DEBUG [StoreOpener-eff2f3cd498b2012f65d5fe1e65052d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/f 2023-07-12 08:18:24,523 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=29 2023-07-12 08:18:24,523 DEBUG [StoreOpener-eff2f3cd498b2012f65d5fe1e65052d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/f 2023-07-12 08:18:24,523 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=29, state=SUCCESS; OpenRegionProcedure 027e08cc9f8687683905525a252fa5bb, server=jenkins-hbase4.apache.org,38647,1689149897534 in 260 msec 2023-07-12 08:18:24,523 INFO [StoreOpener-eff2f3cd498b2012f65d5fe1e65052d9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 
EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eff2f3cd498b2012f65d5fe1e65052d9 columnFamilyName f 2023-07-12 08:18:24,524 INFO [StoreOpener-eff2f3cd498b2012f65d5fe1e65052d9-1] regionserver.HStore(310): Store=eff2f3cd498b2012f65d5fe1e65052d9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:24,525 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, REOPEN/MOVE in 627 msec 2023-07-12 08:18:24,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:24,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:24,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:24,531 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eff2f3cd498b2012f65d5fe1e65052d9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10917258080, jitterRate=0.016748890280723572}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:24,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eff2f3cd498b2012f65d5fe1e65052d9: 2023-07-12 08:18:24,532 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9., pid=40, masterSystemTime=1689149904411 2023-07-12 08:18:24,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:24,534 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 
2023-07-12 08:18:24,534 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=eff2f3cd498b2012f65d5fe1e65052d9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,535 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904534"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149904534"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149904534"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149904534"}]},"ts":"1689149904534"} 2023-07-12 08:18:24,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=31 2023-07-12 08:18:24,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=31, state=SUCCESS; OpenRegionProcedure eff2f3cd498b2012f65d5fe1e65052d9, server=jenkins-hbase4.apache.org,38647,1689149897534 in 278 msec 2023-07-12 08:18:24,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, REOPEN/MOVE in 638 msec 2023-07-12 08:18:24,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-12 08:18:24,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_169286504. 
2023-07-12 08:18:24,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:24,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:24,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:24,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:24,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:24,922 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:24,929 INFO [Listener at localhost/44853] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:24,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:24,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:24,949 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149904949"}]},"ts":"1689149904949"} 2023-07-12 08:18:24,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-12 08:18:24,951 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 08:18:24,953 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 08:18:24,958 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, UNASSIGN}] 2023-07-12 08:18:24,960 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, UNASSIGN 2023-07-12 08:18:24,960 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, UNASSIGN 2023-07-12 08:18:24,960 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, UNASSIGN 2023-07-12 08:18:24,961 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, UNASSIGN 2023-07-12 08:18:24,961 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, UNASSIGN 2023-07-12 08:18:24,961 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=027e08cc9f8687683905525a252fa5bb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,961 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=2cf907e22251436df677e8fd6ee97af8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,962 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149904961"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149904961"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149904961"}]},"ts":"1689149904961"} 2023-07-12 08:18:24,962 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904961"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149904961"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149904961"}]},"ts":"1689149904961"} 2023-07-12 08:18:24,962 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=74773293fc1208506696c5f04ee5bb8f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,962 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=eff2f3cd498b2012f65d5fe1e65052d9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:24,962 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=cb2badad8695bfb67b04fbaa48dd2d4f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:24,962 DEBUG [PEWorker-3] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149904962"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149904962"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149904962"}]},"ts":"1689149904962"} 2023-07-12 08:18:24,962 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904962"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149904962"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149904962"}]},"ts":"1689149904962"} 2023-07-12 08:18:24,962 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149904962"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149904962"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149904962"}]},"ts":"1689149904962"} 2023-07-12 08:18:24,964 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=45, state=RUNNABLE; CloseRegionProcedure 027e08cc9f8687683905525a252fa5bb, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:24,965 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=46, state=RUNNABLE; CloseRegionProcedure 2cf907e22251436df677e8fd6ee97af8, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:24,967 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=49, state=RUNNABLE; CloseRegionProcedure 74773293fc1208506696c5f04ee5bb8f, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:24,968 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=48, state=RUNNABLE; CloseRegionProcedure cb2badad8695bfb67b04fbaa48dd2d4f, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:24,969 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=47, state=RUNNABLE; CloseRegionProcedure eff2f3cd498b2012f65d5fe1e65052d9, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:25,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-12 08:18:25,117 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:25,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 74773293fc1208506696c5f04ee5bb8f, disabling compactions & flushes 2023-07-12 08:18:25,118 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:25,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 
2023-07-12 08:18:25,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. after waiting 0 ms 2023-07-12 08:18:25,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:25,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 08:18:25,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f. 2023-07-12 08:18:25,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 74773293fc1208506696c5f04ee5bb8f: 2023-07-12 08:18:25,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:25,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cb2badad8695bfb67b04fbaa48dd2d4f, disabling compactions & flushes 2023-07-12 08:18:25,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:25,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:25,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. after waiting 0 ms 2023-07-12 08:18:25,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:25,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:25,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:25,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2cf907e22251436df677e8fd6ee97af8, disabling compactions & flushes 2023-07-12 08:18:25,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:25,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 
2023-07-12 08:18:25,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. after waiting 0 ms 2023-07-12 08:18:25,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 2023-07-12 08:18:25,132 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=74773293fc1208506696c5f04ee5bb8f, regionState=CLOSED 2023-07-12 08:18:25,133 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149905132"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149905132"}]},"ts":"1689149905132"} 2023-07-12 08:18:25,142 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=49 2023-07-12 08:18:25,142 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; CloseRegionProcedure 74773293fc1208506696c5f04ee5bb8f, server=jenkins-hbase4.apache.org,38647,1689149897534 in 172 msec 2023-07-12 08:18:25,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 08:18:25,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=74773293fc1208506696c5f04ee5bb8f, UNASSIGN in 187 msec 2023-07-12 08:18:25,150 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 08:18:25,150 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f. 2023-07-12 08:18:25,150 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cb2badad8695bfb67b04fbaa48dd2d4f: 2023-07-12 08:18:25,152 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8. 
2023-07-12 08:18:25,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2cf907e22251436df677e8fd6ee97af8: 2023-07-12 08:18:25,153 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:25,154 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=cb2badad8695bfb67b04fbaa48dd2d4f, regionState=CLOSED 2023-07-12 08:18:25,154 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149905154"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149905154"}]},"ts":"1689149905154"} 2023-07-12 08:18:25,154 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:25,154 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:25,155 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eff2f3cd498b2012f65d5fe1e65052d9, disabling compactions & flushes 2023-07-12 08:18:25,155 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:25,155 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:25,155 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. after waiting 0 ms 2023-07-12 08:18:25,155 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 2023-07-12 08:18:25,155 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=2cf907e22251436df677e8fd6ee97af8, regionState=CLOSED 2023-07-12 08:18:25,156 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149905155"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149905155"}]},"ts":"1689149905155"} 2023-07-12 08:18:25,161 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 08:18:25,163 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=48 2023-07-12 08:18:25,163 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9. 
2023-07-12 08:18:25,163 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=48, state=SUCCESS; CloseRegionProcedure cb2badad8695bfb67b04fbaa48dd2d4f, server=jenkins-hbase4.apache.org,36999,1689149897362 in 189 msec 2023-07-12 08:18:25,163 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eff2f3cd498b2012f65d5fe1e65052d9: 2023-07-12 08:18:25,164 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=46 2023-07-12 08:18:25,165 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=46, state=SUCCESS; CloseRegionProcedure 2cf907e22251436df677e8fd6ee97af8, server=jenkins-hbase4.apache.org,38647,1689149897534 in 194 msec 2023-07-12 08:18:25,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:25,166 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cb2badad8695bfb67b04fbaa48dd2d4f, UNASSIGN in 208 msec 2023-07-12 08:18:25,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:25,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 027e08cc9f8687683905525a252fa5bb, disabling compactions & flushes 2023-07-12 08:18:25,167 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:25,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 2023-07-12 08:18:25,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. after waiting 0 ms 2023-07-12 08:18:25,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 
2023-07-12 08:18:25,167 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=eff2f3cd498b2012f65d5fe1e65052d9, regionState=CLOSED 2023-07-12 08:18:25,167 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149905167"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149905167"}]},"ts":"1689149905167"} 2023-07-12 08:18:25,168 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cf907e22251436df677e8fd6ee97af8, UNASSIGN in 210 msec 2023-07-12 08:18:25,173 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=47 2023-07-12 08:18:25,173 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=47, state=SUCCESS; CloseRegionProcedure eff2f3cd498b2012f65d5fe1e65052d9, server=jenkins-hbase4.apache.org,38647,1689149897534 in 200 msec 2023-07-12 08:18:25,175 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 08:18:25,176 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eff2f3cd498b2012f65d5fe1e65052d9, UNASSIGN in 218 msec 2023-07-12 08:18:25,176 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb. 
2023-07-12 08:18:25,176 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 027e08cc9f8687683905525a252fa5bb: 2023-07-12 08:18:25,178 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:25,178 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=027e08cc9f8687683905525a252fa5bb, regionState=CLOSED 2023-07-12 08:18:25,178 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149905178"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149905178"}]},"ts":"1689149905178"} 2023-07-12 08:18:25,182 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=45 2023-07-12 08:18:25,182 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=45, state=SUCCESS; CloseRegionProcedure 027e08cc9f8687683905525a252fa5bb, server=jenkins-hbase4.apache.org,38647,1689149897534 in 216 msec 2023-07-12 08:18:25,185 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=44 2023-07-12 08:18:25,185 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=027e08cc9f8687683905525a252fa5bb, UNASSIGN in 227 msec 2023-07-12 08:18:25,186 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149905186"}]},"ts":"1689149905186"} 2023-07-12 08:18:25,187 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 08:18:25,189 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 08:18:25,192 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 253 msec 2023-07-12 08:18:25,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-12 08:18:25,255 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 2023-07-12 08:18:25,256 INFO [Listener at localhost/44853] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:25,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:25,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-12 08:18:25,278 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-12 08:18:25,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44301] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 08:18:25,292 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:25,292 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:25,292 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:25,292 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:25,292 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:25,299 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/recovered.edits] 2023-07-12 08:18:25,299 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/recovered.edits] 2023-07-12 08:18:25,299 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/recovered.edits] 2023-07-12 08:18:25,300 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/recovered.edits] 2023-07-12 08:18:25,300 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/f, FileablePath, 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/recovered.edits] 2023-07-12 08:18:25,307 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 08:18:25,365 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/recovered.edits/7.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8/recovered.edits/7.seqid 2023-07-12 08:18:25,369 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cf907e22251436df677e8fd6ee97af8 2023-07-12 08:18:25,369 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/recovered.edits/7.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9/recovered.edits/7.seqid 2023-07-12 08:18:25,370 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eff2f3cd498b2012f65d5fe1e65052d9 2023-07-12 08:18:25,370 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/recovered.edits/7.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f/recovered.edits/7.seqid 2023-07-12 08:18:25,373 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cb2badad8695bfb67b04fbaa48dd2d4f 2023-07-12 08:18:25,374 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/recovered.edits/7.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f/recovered.edits/7.seqid 2023-07-12 08:18:25,375 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/74773293fc1208506696c5f04ee5bb8f 2023-07-12 08:18:25,375 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/recovered.edits/7.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb/recovered.edits/7.seqid 2023-07-12 08:18:25,376 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/027e08cc9f8687683905525a252fa5bb 2023-07-12 08:18:25,376 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 08:18:25,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 08:18:25,450 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 08:18:25,461 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-12 08:18:25,462 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-12 08:18:25,463 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149905462"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:25,463 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149905462"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:25,463 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149905462"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:25,463 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149905462"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:25,463 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149905462"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:25,467 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 08:18:25,468 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 08:18:25,468 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 027e08cc9f8687683905525a252fa5bb, NAME => 'Group_testTableMoveTruncateAndDrop,,1689149902700.027e08cc9f8687683905525a252fa5bb.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 2cf907e22251436df677e8fd6ee97af8, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689149902700.2cf907e22251436df677e8fd6ee97af8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => eff2f3cd498b2012f65d5fe1e65052d9, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149902700.eff2f3cd498b2012f65d5fe1e65052d9.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => cb2badad8695bfb67b04fbaa48dd2d4f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149902700.cb2badad8695bfb67b04fbaa48dd2d4f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 74773293fc1208506696c5f04ee5bb8f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689149902700.74773293fc1208506696c5f04ee5bb8f.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 08:18:25,468 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-12 08:18:25,468 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689149905468"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:25,470 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 08:18:25,481 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:25,482 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:25,482 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:25,482 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:25,481 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:25,482 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5 empty. 2023-07-12 08:18:25,482 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a empty. 
2023-07-12 08:18:25,483 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:25,483 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad empty. 2023-07-12 08:18:25,483 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133 empty. 2023-07-12 08:18:25,484 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:25,484 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:25,484 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:25,484 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3 empty. 
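The entries above record the client disabling the table (DisableTableProcedure, pid=44) and then kicking off TruncateTableProcedure pid=55 with preserveSplits=true, after which the old region directories are archived. For orientation, a minimal client-side sketch of that disable-then-truncate call sequence, assuming a standard HBase 2.x Admin connection; the class name and configuration are illustrative and not taken from the test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncatePreservingSplitsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();               // illustrative client configuration
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          admin.disableTable(table);         // corresponds to DisableTableProcedure pid=44 above
          admin.truncateTable(table, true);  // corresponds to TruncateTableProcedure pid=55, preserveSplits=true
        }
      }
    }
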
2023-07-12 08:18:25,485 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:25,485 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 08:18:25,620 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 08:18:25,621 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 08:18:25,621 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 08:18:25,622 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 08:18:25,622 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 08:18:25,622 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 08:18:25,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 08:18:25,917 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:25,919 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => c5e8f79cbb84b693e87ad11a0026d133, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:25,919 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => bbcdf6f604979a942b38826c25d6ced5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', 
DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:25,919 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => f6540f8afa5608b8573f19613e2917b3, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:25,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 08:18:25,963 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:25,963 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing c5e8f79cbb84b693e87ad11a0026d133, disabling compactions & flushes 2023-07-12 08:18:25,963 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 2023-07-12 08:18:25,963 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 2023-07-12 08:18:25,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. after waiting 0 ms 2023-07-12 08:18:25,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 2023-07-12 08:18:25,964 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 
2023-07-12 08:18:25,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for c5e8f79cbb84b693e87ad11a0026d133: 2023-07-12 08:18:25,964 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 18dfe9345144bfc126a93d7f5f95137a, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:25,967 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:25,967 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing f6540f8afa5608b8573f19613e2917b3, disabling compactions & flushes 2023-07-12 08:18:25,967 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 2023-07-12 08:18:25,967 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 2023-07-12 08:18:25,967 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. after waiting 0 ms 2023-07-12 08:18:25,967 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 2023-07-12 08:18:25,967 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 
2023-07-12 08:18:25,967 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for f6540f8afa5608b8573f19613e2917b3: 2023-07-12 08:18:25,968 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 64f609e53e2e896749c6edda653b25ad, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:25,991 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:25,991 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 18dfe9345144bfc126a93d7f5f95137a, disabling compactions & flushes 2023-07-12 08:18:25,991 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 2023-07-12 08:18:25,992 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 2023-07-12 08:18:25,992 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. after waiting 0 ms 2023-07-12 08:18:25,992 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 2023-07-12 08:18:25,992 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 
2023-07-12 08:18:25,992 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 18dfe9345144bfc126a93d7f5f95137a: 2023-07-12 08:18:25,996 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:25,996 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 64f609e53e2e896749c6edda653b25ad, disabling compactions & flushes 2023-07-12 08:18:25,996 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 2023-07-12 08:18:25,996 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 2023-07-12 08:18:25,996 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. after waiting 0 ms 2023-07-12 08:18:25,996 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 2023-07-12 08:18:25,996 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 2023-07-12 08:18:25,996 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 64f609e53e2e896749c6edda653b25ad: 2023-07-12 08:18:26,367 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:26,367 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing bbcdf6f604979a942b38826c25d6ced5, disabling compactions & flushes 2023-07-12 08:18:26,367 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 2023-07-12 08:18:26,367 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 2023-07-12 08:18:26,367 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 
after waiting 0 ms 2023-07-12 08:18:26,367 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 2023-07-12 08:18:26,367 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 2023-07-12 08:18:26,367 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for bbcdf6f604979a942b38826c25d6ced5: 2023-07-12 08:18:26,371 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149906371"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149906371"}]},"ts":"1689149906371"} 2023-07-12 08:18:26,372 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149906371"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149906371"}]},"ts":"1689149906371"} 2023-07-12 08:18:26,372 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149906371"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149906371"}]},"ts":"1689149906371"} 2023-07-12 08:18:26,372 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149906371"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149906371"}]},"ts":"1689149906371"} 2023-07-12 08:18:26,372 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149906371"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149906371"}]},"ts":"1689149906371"} 2023-07-12 08:18:26,375 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
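Because preserveSplits=true, the procedure re-creates the table with the same boundaries ('aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz') and the single family 'f' with VERSIONS => '1', which is why five regions are added back to meta above. For illustration only, roughly the equivalent pre-split creation a client could issue; the byte arrays for the two non-printable split keys are approximations of the escaped values shown in the log:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    final class PreSplitSketch {
      static void createPreSplit(Admin admin) throws IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)            // VERSIONS => '1' in the descriptor above
                .build())
            .build();
        byte[][] splits = new byte[][] {
            Bytes.toBytes("aaaaa"),
            new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},   // approximation of i\xBF\x14i\xBE
            new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},          // approximation of r\x1C\xC7r\x1B
            Bytes.toBytes("zzzzz")
        };
        admin.createTable(desc, splits);      // yields the five regions re-created above
      }
    }
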
2023-07-12 08:18:26,377 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149906377"}]},"ts":"1689149906377"} 2023-07-12 08:18:26,378 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 08:18:26,383 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:26,383 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:26,383 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:26,383 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:26,384 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bbcdf6f604979a942b38826c25d6ced5, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6540f8afa5608b8573f19613e2917b3, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5e8f79cbb84b693e87ad11a0026d133, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=18dfe9345144bfc126a93d7f5f95137a, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64f609e53e2e896749c6edda653b25ad, ASSIGN}] 2023-07-12 08:18:26,386 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6540f8afa5608b8573f19613e2917b3, ASSIGN 2023-07-12 08:18:26,386 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64f609e53e2e896749c6edda653b25ad, ASSIGN 2023-07-12 08:18:26,386 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bbcdf6f604979a942b38826c25d6ced5, ASSIGN 2023-07-12 08:18:26,387 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=18dfe9345144bfc126a93d7f5f95137a, ASSIGN 2023-07-12 08:18:26,387 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5e8f79cbb84b693e87ad11a0026d133, ASSIGN 2023-07-12 08:18:26,388 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6540f8afa5608b8573f19613e2917b3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38647,1689149897534; forceNewPlan=false, retain=false 2023-07-12 08:18:26,388 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=18dfe9345144bfc126a93d7f5f95137a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36999,1689149897362; forceNewPlan=false, retain=false 2023-07-12 08:18:26,388 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5e8f79cbb84b693e87ad11a0026d133, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36999,1689149897362; forceNewPlan=false, retain=false 2023-07-12 08:18:26,388 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bbcdf6f604979a942b38826c25d6ced5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36999,1689149897362; forceNewPlan=false, retain=false 2023-07-12 08:18:26,388 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64f609e53e2e896749c6edda653b25ad, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38647,1689149897534; forceNewPlan=false, retain=false 2023-07-12 08:18:26,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 08:18:26,539 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
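The recurring "Checking to see if procedure is done pid=55" entries are the master answering the client's completion polls for the truncate; on the client side this amounts to awaiting a future, roughly as in this sketch (assuming the same Admin handle as in the earlier example):

    import java.util.concurrent.Future;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class TruncateWaitSketch {
      static void truncateAndWait(Admin admin) throws Exception {
        Future<Void> done = admin.truncateTableAsync(
            TableName.valueOf("Group_testTableMoveTruncateAndDrop"), true);
        done.get();  // returns once TruncateTableProcedure (pid=55 above) reaches SUCCESS
      }
    }
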
2023-07-12 08:18:26,542 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=18dfe9345144bfc126a93d7f5f95137a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:26,542 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=f6540f8afa5608b8573f19613e2917b3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:26,542 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=c5e8f79cbb84b693e87ad11a0026d133, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:26,542 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=64f609e53e2e896749c6edda653b25ad, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:26,542 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149906542"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149906542"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149906542"}]},"ts":"1689149906542"} 2023-07-12 08:18:26,542 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=bbcdf6f604979a942b38826c25d6ced5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:26,543 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149906542"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149906542"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149906542"}]},"ts":"1689149906542"} 2023-07-12 08:18:26,543 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149906542"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149906542"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149906542"}]},"ts":"1689149906542"} 2023-07-12 08:18:26,542 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149906542"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149906542"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149906542"}]},"ts":"1689149906542"} 2023-07-12 08:18:26,542 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149906542"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149906542"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149906542"}]},"ts":"1689149906542"} 2023-07-12 08:18:26,545 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=58, state=RUNNABLE; OpenRegionProcedure 
c5e8f79cbb84b693e87ad11a0026d133, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:26,546 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=60, state=RUNNABLE; OpenRegionProcedure 64f609e53e2e896749c6edda653b25ad, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:26,549 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=56, state=RUNNABLE; OpenRegionProcedure bbcdf6f604979a942b38826c25d6ced5, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:26,552 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=59, state=RUNNABLE; OpenRegionProcedure 18dfe9345144bfc126a93d7f5f95137a, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:26,558 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=57, state=RUNNABLE; OpenRegionProcedure f6540f8afa5608b8573f19613e2917b3, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:26,702 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 2023-07-12 08:18:26,703 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 18dfe9345144bfc126a93d7f5f95137a, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 08:18:26,703 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 
2023-07-12 08:18:26,703 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 64f609e53e2e896749c6edda653b25ad, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 08:18:26,703 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:26,703 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:26,703 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:26,703 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:26,703 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:26,703 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:26,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:26,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:26,705 INFO [StoreOpener-18dfe9345144bfc126a93d7f5f95137a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:26,705 INFO [StoreOpener-64f609e53e2e896749c6edda653b25ad-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:26,707 DEBUG [StoreOpener-64f609e53e2e896749c6edda653b25ad-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad/f 2023-07-12 08:18:26,707 DEBUG [StoreOpener-64f609e53e2e896749c6edda653b25ad-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad/f 2023-07-12 08:18:26,707 DEBUG [StoreOpener-18dfe9345144bfc126a93d7f5f95137a-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a/f 2023-07-12 08:18:26,707 DEBUG [StoreOpener-18dfe9345144bfc126a93d7f5f95137a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a/f 2023-07-12 08:18:26,707 INFO [StoreOpener-64f609e53e2e896749c6edda653b25ad-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 64f609e53e2e896749c6edda653b25ad columnFamilyName f 2023-07-12 08:18:26,707 INFO [StoreOpener-18dfe9345144bfc126a93d7f5f95137a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 18dfe9345144bfc126a93d7f5f95137a columnFamilyName f 2023-07-12 08:18:26,708 INFO [StoreOpener-64f609e53e2e896749c6edda653b25ad-1] regionserver.HStore(310): Store=64f609e53e2e896749c6edda653b25ad/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:26,708 INFO [StoreOpener-18dfe9345144bfc126a93d7f5f95137a-1] regionserver.HStore(310): Store=18dfe9345144bfc126a93d7f5f95137a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:26,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:26,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:26,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:26,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:26,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:26,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:26,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:26,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:26,717 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 18dfe9345144bfc126a93d7f5f95137a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11371923360, jitterRate=0.059092894196510315}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:26,717 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 64f609e53e2e896749c6edda653b25ad; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9805706720, jitterRate=-0.08677239716053009}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:26,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 18dfe9345144bfc126a93d7f5f95137a: 2023-07-12 08:18:26,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 64f609e53e2e896749c6edda653b25ad: 2023-07-12 08:18:26,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad., pid=62, masterSystemTime=1689149906699 2023-07-12 08:18:26,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a., pid=64, masterSystemTime=1689149906697 2023-07-12 08:18:26,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 
2023-07-12 08:18:26,720 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 2023-07-12 08:18:26,720 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 2023-07-12 08:18:26,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6540f8afa5608b8573f19613e2917b3, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 08:18:26,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:26,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:26,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:26,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:26,722 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=64f609e53e2e896749c6edda653b25ad, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:26,722 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149906722"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149906722"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149906722"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149906722"}]},"ts":"1689149906722"} 2023-07-12 08:18:26,724 INFO [StoreOpener-f6540f8afa5608b8573f19613e2917b3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:26,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 2023-07-12 08:18:26,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 2023-07-12 08:18:26,726 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 
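The open/assign procedures above are placing the five new regions on jenkins-hbase4.apache.org,36999,1689149897362 and jenkins-hbase4.apache.org,38647,1689149897534. Once the table is back online, a client could list the resulting assignments roughly like this (a sketch, not part of the test):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    final class AssignmentCheckSketch {
      static void printLocations(Connection conn) throws IOException {
        try (RegionLocator locator =
                 conn.getRegionLocator(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // e.g. 64f609e53e2e896749c6edda653b25ad -> jenkins-hbase4.apache.org,38647,1689149897534
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
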
2023-07-12 08:18:26,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bbcdf6f604979a942b38826c25d6ced5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 08:18:26,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:26,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:26,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:26,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:26,727 DEBUG [StoreOpener-f6540f8afa5608b8573f19613e2917b3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3/f 2023-07-12 08:18:26,727 DEBUG [StoreOpener-f6540f8afa5608b8573f19613e2917b3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3/f 2023-07-12 08:18:26,727 INFO [StoreOpener-f6540f8afa5608b8573f19613e2917b3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6540f8afa5608b8573f19613e2917b3 columnFamilyName f 2023-07-12 08:18:26,728 INFO [StoreOpener-f6540f8afa5608b8573f19613e2917b3-1] regionserver.HStore(310): Store=f6540f8afa5608b8573f19613e2917b3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:26,728 INFO [StoreOpener-bbcdf6f604979a942b38826c25d6ced5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:26,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3 2023-07-12 
08:18:26,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:26,730 DEBUG [StoreOpener-bbcdf6f604979a942b38826c25d6ced5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5/f 2023-07-12 08:18:26,730 DEBUG [StoreOpener-bbcdf6f604979a942b38826c25d6ced5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5/f 2023-07-12 08:18:26,730 INFO [StoreOpener-bbcdf6f604979a942b38826c25d6ced5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bbcdf6f604979a942b38826c25d6ced5 columnFamilyName f 2023-07-12 08:18:26,731 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=18dfe9345144bfc126a93d7f5f95137a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:26,731 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149906731"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149906731"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149906731"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149906731"}]},"ts":"1689149906731"} 2023-07-12 08:18:26,731 INFO [StoreOpener-bbcdf6f604979a942b38826c25d6ced5-1] regionserver.HStore(310): Store=bbcdf6f604979a942b38826c25d6ced5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:26,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:26,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:26,733 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=60 2023-07-12 08:18:26,733 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; OpenRegionProcedure 64f609e53e2e896749c6edda653b25ad, 
server=jenkins-hbase4.apache.org,38647,1689149897534 in 178 msec 2023-07-12 08:18:26,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:26,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64f609e53e2e896749c6edda653b25ad, ASSIGN in 350 msec 2023-07-12 08:18:26,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:26,736 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6540f8afa5608b8573f19613e2917b3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9921908480, jitterRate=-0.0759502649307251}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:26,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6540f8afa5608b8573f19613e2917b3: 2023-07-12 08:18:26,736 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=59 2023-07-12 08:18:26,736 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=59, state=SUCCESS; OpenRegionProcedure 18dfe9345144bfc126a93d7f5f95137a, server=jenkins-hbase4.apache.org,36999,1689149897362 in 181 msec 2023-07-12 08:18:26,737 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3., pid=65, masterSystemTime=1689149906699 2023-07-12 08:18:26,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:26,738 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=18dfe9345144bfc126a93d7f5f95137a, ASSIGN in 353 msec 2023-07-12 08:18:26,738 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 2023-07-12 08:18:26,739 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 
2023-07-12 08:18:26,739 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=f6540f8afa5608b8573f19613e2917b3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:26,739 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149906739"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149906739"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149906739"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149906739"}]},"ts":"1689149906739"} 2023-07-12 08:18:26,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:26,740 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bbcdf6f604979a942b38826c25d6ced5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11361614880, jitterRate=0.0581328421831131}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:26,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bbcdf6f604979a942b38826c25d6ced5: 2023-07-12 08:18:26,741 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5., pid=63, masterSystemTime=1689149906697 2023-07-12 08:18:26,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 2023-07-12 08:18:26,743 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 2023-07-12 08:18:26,743 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 
2023-07-12 08:18:26,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c5e8f79cbb84b693e87ad11a0026d133, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 08:18:26,743 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=bbcdf6f604979a942b38826c25d6ced5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:26,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:26,743 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149906743"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149906743"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149906743"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149906743"}]},"ts":"1689149906743"} 2023-07-12 08:18:26,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:26,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:26,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:26,744 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=57 2023-07-12 08:18:26,744 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=57, state=SUCCESS; OpenRegionProcedure f6540f8afa5608b8573f19613e2917b3, server=jenkins-hbase4.apache.org,38647,1689149897534 in 185 msec 2023-07-12 08:18:26,745 INFO [StoreOpener-c5e8f79cbb84b693e87ad11a0026d133-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:26,746 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6540f8afa5608b8573f19613e2917b3, ASSIGN in 361 msec 2023-07-12 08:18:26,747 DEBUG [StoreOpener-c5e8f79cbb84b693e87ad11a0026d133-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133/f 2023-07-12 08:18:26,747 DEBUG [StoreOpener-c5e8f79cbb84b693e87ad11a0026d133-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133/f 2023-07-12 08:18:26,748 INFO [StoreOpener-c5e8f79cbb84b693e87ad11a0026d133-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c5e8f79cbb84b693e87ad11a0026d133 columnFamilyName f 2023-07-12 08:18:26,749 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=56 2023-07-12 08:18:26,749 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=56, state=SUCCESS; OpenRegionProcedure bbcdf6f604979a942b38826c25d6ced5, server=jenkins-hbase4.apache.org,36999,1689149897362 in 196 msec 2023-07-12 08:18:26,749 INFO [StoreOpener-c5e8f79cbb84b693e87ad11a0026d133-1] regionserver.HStore(310): Store=c5e8f79cbb84b693e87ad11a0026d133/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:26,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:26,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:26,751 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bbcdf6f604979a942b38826c25d6ced5, ASSIGN in 366 msec 2023-07-12 08:18:26,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:26,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:26,756 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c5e8f79cbb84b693e87ad11a0026d133; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11973431040, jitterRate=0.11511266231536865}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:26,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c5e8f79cbb84b693e87ad11a0026d133: 2023-07-12 
08:18:26,757 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133., pid=61, masterSystemTime=1689149906697 2023-07-12 08:18:26,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 2023-07-12 08:18:26,759 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 2023-07-12 08:18:26,760 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=c5e8f79cbb84b693e87ad11a0026d133, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:26,760 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149906760"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149906760"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149906760"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149906760"}]},"ts":"1689149906760"} 2023-07-12 08:18:26,764 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=58 2023-07-12 08:18:26,764 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=58, state=SUCCESS; OpenRegionProcedure c5e8f79cbb84b693e87ad11a0026d133, server=jenkins-hbase4.apache.org,36999,1689149897362 in 216 msec 2023-07-12 08:18:26,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=55 2023-07-12 08:18:26,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5e8f79cbb84b693e87ad11a0026d133, ASSIGN in 381 msec 2023-07-12 08:18:26,766 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149906766"}]},"ts":"1689149906766"} 2023-07-12 08:18:26,767 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 08:18:26,769 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-12 08:18:26,771 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.5030 sec 2023-07-12 08:18:27,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 08:18:27,434 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-12 08:18:27,436 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): 
Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:27,436 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:27,437 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:27,437 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:27,438 INFO [Listener at localhost/44853] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:27,439 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:27,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:27,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-12 08:18:27,447 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149907447"}]},"ts":"1689149907447"} 2023-07-12 08:18:27,448 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 08:18:27,451 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 08:18:27,452 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bbcdf6f604979a942b38826c25d6ced5, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6540f8afa5608b8573f19613e2917b3, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5e8f79cbb84b693e87ad11a0026d133, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=18dfe9345144bfc126a93d7f5f95137a, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64f609e53e2e896749c6edda653b25ad, UNASSIGN}] 2023-07-12 08:18:27,454 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5e8f79cbb84b693e87ad11a0026d133, UNASSIGN 2023-07-12 08:18:27,454 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for 
pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bbcdf6f604979a942b38826c25d6ced5, UNASSIGN 2023-07-12 08:18:27,455 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=18dfe9345144bfc126a93d7f5f95137a, UNASSIGN 2023-07-12 08:18:27,455 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6540f8afa5608b8573f19613e2917b3, UNASSIGN 2023-07-12 08:18:27,455 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64f609e53e2e896749c6edda653b25ad, UNASSIGN 2023-07-12 08:18:27,456 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=c5e8f79cbb84b693e87ad11a0026d133, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:27,456 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=18dfe9345144bfc126a93d7f5f95137a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:27,456 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149907456"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149907456"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149907456"}]},"ts":"1689149907456"} 2023-07-12 08:18:27,456 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=bbcdf6f604979a942b38826c25d6ced5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:27,456 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149907456"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149907456"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149907456"}]},"ts":"1689149907456"} 2023-07-12 08:18:27,456 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149907456"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149907456"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149907456"}]},"ts":"1689149907456"} 2023-07-12 08:18:27,456 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=f6540f8afa5608b8573f19613e2917b3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:27,457 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149907456"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149907456"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149907456"}]},"ts":"1689149907456"} 2023-07-12 08:18:27,457 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=64f609e53e2e896749c6edda653b25ad, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:27,457 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149907457"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149907457"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149907457"}]},"ts":"1689149907457"} 2023-07-12 08:18:27,458 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=69, state=RUNNABLE; CloseRegionProcedure c5e8f79cbb84b693e87ad11a0026d133, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:27,459 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=67, state=RUNNABLE; CloseRegionProcedure bbcdf6f604979a942b38826c25d6ced5, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:27,461 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=70, state=RUNNABLE; CloseRegionProcedure 18dfe9345144bfc126a93d7f5f95137a, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:27,463 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=68, state=RUNNABLE; CloseRegionProcedure f6540f8afa5608b8573f19613e2917b3, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:27,464 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=71, state=RUNNABLE; CloseRegionProcedure 64f609e53e2e896749c6edda653b25ad, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:27,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-12 08:18:27,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:27,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c5e8f79cbb84b693e87ad11a0026d133, disabling compactions & flushes 2023-07-12 08:18:27,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 2023-07-12 08:18:27,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 2023-07-12 08:18:27,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 
after waiting 0 ms 2023-07-12 08:18:27,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 2023-07-12 08:18:27,617 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:27,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6540f8afa5608b8573f19613e2917b3, disabling compactions & flushes 2023-07-12 08:18:27,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 2023-07-12 08:18:27,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 2023-07-12 08:18:27,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. after waiting 0 ms 2023-07-12 08:18:27,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 2023-07-12 08:18:27,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:27,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133. 2023-07-12 08:18:27,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c5e8f79cbb84b693e87ad11a0026d133: 2023-07-12 08:18:27,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:27,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:27,624 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=c5e8f79cbb84b693e87ad11a0026d133, regionState=CLOSED 2023-07-12 08:18:27,624 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149907624"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149907624"}]},"ts":"1689149907624"} 2023-07-12 08:18:27,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bbcdf6f604979a942b38826c25d6ced5, disabling compactions & flushes 2023-07-12 08:18:27,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 
2023-07-12 08:18:27,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 2023-07-12 08:18:27,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. after waiting 0 ms 2023-07-12 08:18:27,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 2023-07-12 08:18:27,629 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=69 2023-07-12 08:18:27,629 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; CloseRegionProcedure c5e8f79cbb84b693e87ad11a0026d133, server=jenkins-hbase4.apache.org,36999,1689149897362 in 168 msec 2023-07-12 08:18:27,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:27,632 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3. 2023-07-12 08:18:27,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6540f8afa5608b8573f19613e2917b3: 2023-07-12 08:18:27,634 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5e8f79cbb84b693e87ad11a0026d133, UNASSIGN in 178 msec 2023-07-12 08:18:27,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:27,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:27,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 64f609e53e2e896749c6edda653b25ad, disabling compactions & flushes 2023-07-12 08:18:27,635 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 2023-07-12 08:18:27,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 2023-07-12 08:18:27,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. after waiting 0 ms 2023-07-12 08:18:27,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 
2023-07-12 08:18:27,635 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=f6540f8afa5608b8573f19613e2917b3, regionState=CLOSED 2023-07-12 08:18:27,635 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149907635"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149907635"}]},"ts":"1689149907635"} 2023-07-12 08:18:27,639 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=68 2023-07-12 08:18:27,639 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=68, state=SUCCESS; CloseRegionProcedure f6540f8afa5608b8573f19613e2917b3, server=jenkins-hbase4.apache.org,38647,1689149897534 in 174 msec 2023-07-12 08:18:27,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:27,641 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5. 2023-07-12 08:18:27,641 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bbcdf6f604979a942b38826c25d6ced5: 2023-07-12 08:18:27,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:27,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:27,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 18dfe9345144bfc126a93d7f5f95137a, disabling compactions & flushes 2023-07-12 08:18:27,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 2023-07-12 08:18:27,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 2023-07-12 08:18:27,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. after waiting 0 ms 2023-07-12 08:18:27,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 
2023-07-12 08:18:27,646 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6540f8afa5608b8573f19613e2917b3, UNASSIGN in 188 msec 2023-07-12 08:18:27,647 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=bbcdf6f604979a942b38826c25d6ced5, regionState=CLOSED 2023-07-12 08:18:27,647 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149907647"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149907647"}]},"ts":"1689149907647"} 2023-07-12 08:18:27,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:27,650 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad. 2023-07-12 08:18:27,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 64f609e53e2e896749c6edda653b25ad: 2023-07-12 08:18:27,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:27,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=67 2023-07-12 08:18:27,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=67, state=SUCCESS; CloseRegionProcedure bbcdf6f604979a942b38826c25d6ced5, server=jenkins-hbase4.apache.org,36999,1689149897362 in 190 msec 2023-07-12 08:18:27,652 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=64f609e53e2e896749c6edda653b25ad, regionState=CLOSED 2023-07-12 08:18:27,653 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689149907652"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149907652"}]},"ts":"1689149907652"} 2023-07-12 08:18:27,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:27,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a. 
2023-07-12 08:18:27,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 18dfe9345144bfc126a93d7f5f95137a: 2023-07-12 08:18:27,656 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bbcdf6f604979a942b38826c25d6ced5, UNASSIGN in 201 msec 2023-07-12 08:18:27,656 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:27,657 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=18dfe9345144bfc126a93d7f5f95137a, regionState=CLOSED 2023-07-12 08:18:27,657 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689149907657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149907657"}]},"ts":"1689149907657"} 2023-07-12 08:18:27,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=71 2023-07-12 08:18:27,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=71, state=SUCCESS; CloseRegionProcedure 64f609e53e2e896749c6edda653b25ad, server=jenkins-hbase4.apache.org,38647,1689149897534 in 191 msec 2023-07-12 08:18:27,661 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64f609e53e2e896749c6edda653b25ad, UNASSIGN in 208 msec 2023-07-12 08:18:27,662 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=70 2023-07-12 08:18:27,662 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=70, state=SUCCESS; CloseRegionProcedure 18dfe9345144bfc126a93d7f5f95137a, server=jenkins-hbase4.apache.org,36999,1689149897362 in 198 msec 2023-07-12 08:18:27,665 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=66 2023-07-12 08:18:27,665 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=18dfe9345144bfc126a93d7f5f95137a, UNASSIGN in 211 msec 2023-07-12 08:18:27,666 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149907665"}]},"ts":"1689149907665"} 2023-07-12 08:18:27,673 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 08:18:27,675 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 08:18:27,677 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 237 msec 2023-07-12 08:18:27,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-12 08:18:27,750 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: 
default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-12 08:18:27,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:27,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:27,768 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:27,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_169286504' 2023-07-12 08:18:27,770 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:27,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:27,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:27,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:27,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:27,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-12 08:18:27,785 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:27,785 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:27,785 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:27,785 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:27,786 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:27,788 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3/recovered.edits] 2023-07-12 08:18:27,789 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5/recovered.edits] 2023-07-12 08:18:27,789 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad/recovered.edits] 2023-07-12 08:18:27,789 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133/recovered.edits] 2023-07-12 08:18:27,789 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a/recovered.edits] 2023-07-12 08:18:27,810 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5/recovered.edits/4.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5/recovered.edits/4.seqid 2023-07-12 08:18:27,810 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133/recovered.edits/4.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133/recovered.edits/4.seqid 2023-07-12 08:18:27,810 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3/recovered.edits/4.seqid to 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3/recovered.edits/4.seqid 2023-07-12 08:18:27,810 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad/recovered.edits/4.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad/recovered.edits/4.seqid 2023-07-12 08:18:27,811 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a/recovered.edits/4.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a/recovered.edits/4.seqid 2023-07-12 08:18:27,811 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bbcdf6f604979a942b38826c25d6ced5 2023-07-12 08:18:27,812 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6540f8afa5608b8573f19613e2917b3 2023-07-12 08:18:27,812 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5e8f79cbb84b693e87ad11a0026d133 2023-07-12 08:18:27,812 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64f609e53e2e896749c6edda653b25ad 2023-07-12 08:18:27,812 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/18dfe9345144bfc126a93d7f5f95137a 2023-07-12 08:18:27,812 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 08:18:27,815 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:27,822 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 08:18:27,825 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-12 08:18:27,827 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:27,827 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-12 08:18:27,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149907827"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:27,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149907827"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:27,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149907827"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:27,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149907827"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:27,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149907827"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:27,830 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 08:18:27,830 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => bbcdf6f604979a942b38826c25d6ced5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689149905379.bbcdf6f604979a942b38826c25d6ced5.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => f6540f8afa5608b8573f19613e2917b3, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689149905379.f6540f8afa5608b8573f19613e2917b3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => c5e8f79cbb84b693e87ad11a0026d133, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689149905379.c5e8f79cbb84b693e87ad11a0026d133.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 18dfe9345144bfc126a93d7f5f95137a, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689149905379.18dfe9345144bfc126a93d7f5f95137a.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 64f609e53e2e896749c6edda653b25ad, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689149905379.64f609e53e2e896749c6edda653b25ad.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 08:18:27,830 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-12 08:18:27,831 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689149907830"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:27,833 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 08:18:27,836 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 08:18:27,837 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 77 msec 2023-07-12 08:18:27,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-12 08:18:27,884 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-12 08:18:27,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:27,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:27,889 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38647] ipc.CallRunner(144): callId: 162 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:33522 deadline: 1689149967889, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41817 startCode=1689149901106. As of locationSeqNum=6. 2023-07-12 08:18:28,000 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,000 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,002 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:28,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:28,002 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:28,004 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999] to rsgroup default 2023-07-12 08:18:28,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:28,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:28,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:28,010 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_169286504, current retry=0 2023-07-12 08:18:28,010 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362, jenkins-hbase4.apache.org,38647,1689149897534] are moved back to Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:28,010 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_169286504 => default 2023-07-12 08:18:28,010 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:28,017 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_169286504 2023-07-12 08:18:28,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:28,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 08:18:28,028 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:28,029 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:28,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:28,029 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:28,030 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:28,030 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:28,031 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:28,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:28,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:28,041 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:28,042 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:28,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:28,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:28,048 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:28,052 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,055 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:28,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:28,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 148 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151108055, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:28,056 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:28,060 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:28,061 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,061 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,061 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:28,062 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:28,062 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:28,093 INFO [Listener at localhost/44853] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=493 (was 419) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1887393900-172.31.14.131-1689149891542:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2011584611_17 at /127.0.0.1:40392 [Receiving block BP-1887393900-172.31.14.131-1689149891542:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2011584611_17 at /127.0.0.1:50092 [Receiving block BP-1887393900-172.31.14.131-1689149891542:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:41817 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51057@0x00af7513-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp911407158-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2011584611_17 at /127.0.0.1:51662 [Receiving block BP-1887393900-172.31.14.131-1689149891542:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp911407158-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp911407158-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp911407158-632 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:41817-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp911407158-635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp911407158-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51057@0x00af7513-SendThread(127.0.0.1:51057) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp911407158-633-acceptor-0@67f6035b-ServerConnector@7adb5e78{HTTP/1.1, (http/1.1)}{0.0.0.0:38153} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41817Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_263165551_17 at /127.0.0.1:40418 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1405276520_17 at /127.0.0.1:51700 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1887393900-172.31.14.131-1689149891542:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51057@0x00af7513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1522815941.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x62be270e-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:42813 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9-prefix:jenkins-hbase4.apache.org,41817,1689149901106 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:42813 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1405276520_17 at /127.0.0.1:50248 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1887393900-172.31.14.131-1689149891542:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6ac6849-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp911407158-634 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=776 (was 673) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 571), ProcessCount=174 (was 173) - ProcessCount LEAK? -, AvailableMemoryMB=4070 (was 4613) 2023-07-12 08:18:28,111 INFO [Listener at localhost/44853] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=493, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=528, ProcessCount=174, AvailableMemoryMB=4067 2023-07-12 08:18:28,111 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-12 08:18:28,119 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,119 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,120 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:28,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:28,120 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:28,121 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:28,121 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:28,123 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:28,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:28,129 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:28,133 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:28,134 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:28,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:28,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:28,141 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:28,145 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,145 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,147 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:28,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:28,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 176 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151108147, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:28,149 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:28,151 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:28,152 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,152 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,153 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:28,154 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:28,154 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:28,155 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-12 08:18:28,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:28,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:58548 deadline: 1689151108155, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 08:18:28,157 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-12 08:18:28,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:28,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:58548 deadline: 1689151108157, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 08:18:28,160 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-12 08:18:28,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:28,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:58548 deadline: 1689151108159, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 08:18:28,161 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-12 08:18:28,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-12 08:18:28,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:28,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:28,175 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:28,179 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,179 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,186 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,186 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,188 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:28,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:28,188 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:28,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:28,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:28,190 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-12 08:18:28,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:28,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 08:18:28,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:28,199 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:28,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:28,199 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:28,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:28,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:28,201 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:28,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:28,209 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:28,213 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:28,214 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:28,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:28,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:28,223 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:28,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,236 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:28,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:28,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 220 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151108236, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:28,237 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:28,240 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:28,241 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,241 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,241 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:28,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:28,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:28,264 INFO [Listener at localhost/44853] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=496 (was 493) Potentially hanging thread: hconnection-0x62be270e-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=776 (was 776), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 528), ProcessCount=174 (was 174), AvailableMemoryMB=4057 (was 4067) 2023-07-12 08:18:28,281 INFO [Listener at localhost/44853] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=496, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=528, ProcessCount=174, AvailableMemoryMB=4056 2023-07-12 08:18:28,281 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-12 08:18:28,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:28,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
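The ConstraintException and the "Got this on setup, FYI" warning above come from the harness trying to move the active master's address into the "master" group; the master is not an online region server, so RSGroupAdminServer rejects the move and the test simply logs it and continues. A hedged sketch of that call pattern, assuming a running cluster and relying on the client unwrapping the remote exception back to ConstraintException, as the stack trace above shows:

```java
import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterIntoGroup {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("master");

      // The active master is not an online region server, so the server side
      // rejects the move with "is either offline or it does not exist".
      ServerName master = admin.getClusterMetrics().getMasterName();
      try {
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts(master.getHostname(), master.getPort())),
            "master");
      } catch (ConstraintException expected) {
        // The test logs this as "Got this on setup, FYI" and carries on.
        System.out.println("Expected: " + expected.getMessage());
      }
    }
  }
}
```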
2023-07-12 08:18:28,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:28,288 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:28,288 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:28,289 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:28,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:28,299 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:28,303 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:28,304 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:28,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:28,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:28,311 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:28,326 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,327 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,329 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:28,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:28,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151108329, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:28,330 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:28,332 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:28,333 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,333 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,334 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:28,335 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:28,335 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:28,336 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,336 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,337 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:28,337 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:28,338 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
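The next records show the body of testFailRemoveGroup starting: a new group "bar" is created and three region servers are moved into it, which forces the default-group regions they host (hbase:namespace and hbase:rsgroup) to be reopened on a default-group server via REOPEN/MOVE procedures. Roughly equivalent client-side calls, sketched under the assumption of an open Connection; the choice of three servers is illustrative:

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersToBar {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("bar");

      // Pick three live region servers; regions they host that still belong to
      // the default group are reassigned before the servers join "bar", which is
      // what the REOPEN/MOVE TransitRegionStateProcedures below carry out.
      Set<Address> servers = new HashSet<>();
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        if (servers.size() == 3) {
          break;
        }
        servers.add(Address.fromParts(sn.getHostname(), sn.getPort()));
      }
      rsGroupAdmin.moveServers(servers, "bar");
    }
  }
}
```

The same RSGroupAdmin interface also exposes moveServersAndTables for cases where servers and their tables need to move to a group in one call.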
2023-07-12 08:18:28,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 08:18:28,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:28,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:28,346 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:28,357 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:28,357 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:28,360 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:41817] to rsgroup bar 2023-07-12 08:18:28,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:28,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 08:18:28,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:28,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:28,366 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(238): Moving server region ae71929909c3f585c1f0e7f3408f83d2, which do not belong to RSGroup bar 2023-07-12 08:18:28,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, REOPEN/MOVE 2023-07-12 08:18:28,367 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(238): Moving server region e819f13729c8274f2f0efb5a42e75184, which do not belong to RSGroup bar 2023-07-12 08:18:28,368 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, REOPEN/MOVE 2023-07-12 08:18:28,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, REOPEN/MOVE 2023-07-12 08:18:28,373 INFO 
[PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=ae71929909c3f585c1f0e7f3408f83d2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:28,373 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-12 08:18:28,373 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149908372"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149908372"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149908372"}]},"ts":"1689149908372"} 2023-07-12 08:18:28,374 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, REOPEN/MOVE 2023-07-12 08:18:28,376 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=e819f13729c8274f2f0efb5a42e75184, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:28,376 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149908376"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149908376"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149908376"}]},"ts":"1689149908376"} 2023-07-12 08:18:28,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; CloseRegionProcedure ae71929909c3f585c1f0e7f3408f83d2, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:28,378 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=81, ppid=79, state=RUNNABLE; CloseRegionProcedure e819f13729c8274f2f0efb5a42e75184, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:28,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:28,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e819f13729c8274f2f0efb5a42e75184, disabling compactions & flushes 2023-07-12 08:18:28,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:28,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:28,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. after waiting 0 ms 2023-07-12 08:18:28,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 
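The RegionStateStore Put records above are ordinary writes into hbase:meta's info family (qualifiers regioninfo, sn, state). If you want to inspect that state directly, a small read sketch follows; the row key literal is copied from this run's log and is otherwise hypothetical:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadMetaRegionState {
  // Row key taken from the Put records in this log; substitute your own region row.
  private static final String REGION_ROW =
      "hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.";

  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Get get = new Get(Bytes.toBytes(REGION_ROW))
          .addColumn(HConstants.CATALOG_FAMILY, Bytes.toBytes("state"))
          .addColumn(HConstants.CATALOG_FAMILY, Bytes.toBytes("sn"));
      Result r = meta.get(get);
      // "state" holds CLOSING/CLOSED/OPENING/OPEN; "sn" holds the target server name.
      System.out.println("state = "
          + Bytes.toString(r.getValue(HConstants.CATALOG_FAMILY, Bytes.toBytes("state"))));
      System.out.println("sn    = "
          + Bytes.toString(r.getValue(HConstants.CATALOG_FAMILY, Bytes.toBytes("sn"))));
    }
  }
}
```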
2023-07-12 08:18:28,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e819f13729c8274f2f0efb5a42e75184 1/1 column families, dataSize=4.98 KB heapSize=8.39 KB 2023-07-12 08:18:28,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.98 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/.tmp/m/24863ffb0d764ff2b4d046821226664f 2023-07-12 08:18:28,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 24863ffb0d764ff2b4d046821226664f 2023-07-12 08:18:28,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/.tmp/m/24863ffb0d764ff2b4d046821226664f as hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m/24863ffb0d764ff2b4d046821226664f 2023-07-12 08:18:28,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 24863ffb0d764ff2b4d046821226664f 2023-07-12 08:18:28,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m/24863ffb0d764ff2b4d046821226664f, entries=9, sequenceid=32, filesize=5.5 K 2023-07-12 08:18:28,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.98 KB/5100, heapSize ~8.38 KB/8576, currentSize=0 B/0 for e819f13729c8274f2f0efb5a42e75184 in 72ms, sequenceid=32, compaction requested=false 2023-07-12 08:18:28,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-12 08:18:28,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 08:18:28,633 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 
2023-07-12 08:18:28,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e819f13729c8274f2f0efb5a42e75184: 2023-07-12 08:18:28,633 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e819f13729c8274f2f0efb5a42e75184 move to jenkins-hbase4.apache.org,42347,1689149897465 record at close sequenceid=32 2023-07-12 08:18:28,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:28,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:28,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ae71929909c3f585c1f0e7f3408f83d2, disabling compactions & flushes 2023-07-12 08:18:28,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:28,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:28,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. after waiting 0 ms 2023-07-12 08:18:28,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:28,637 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=e819f13729c8274f2f0efb5a42e75184, regionState=CLOSED 2023-07-12 08:18:28,637 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149908637"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149908637"}]},"ts":"1689149908637"} 2023-07-12 08:18:28,642 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=81, resume processing ppid=79 2023-07-12 08:18:28,643 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=81, ppid=79, state=SUCCESS; CloseRegionProcedure e819f13729c8274f2f0efb5a42e75184, server=jenkins-hbase4.apache.org,41817,1689149901106 in 262 msec 2023-07-12 08:18:28,643 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:28,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-12 08:18:28,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 
2023-07-12 08:18:28,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ae71929909c3f585c1f0e7f3408f83d2: 2023-07-12 08:18:28,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ae71929909c3f585c1f0e7f3408f83d2 move to jenkins-hbase4.apache.org,42347,1689149897465 record at close sequenceid=10 2023-07-12 08:18:28,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:28,663 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=ae71929909c3f585c1f0e7f3408f83d2, regionState=CLOSED 2023-07-12 08:18:28,663 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149908663"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149908663"}]},"ts":"1689149908663"} 2023-07-12 08:18:28,667 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-12 08:18:28,667 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; CloseRegionProcedure ae71929909c3f585c1f0e7f3408f83d2, server=jenkins-hbase4.apache.org,41817,1689149901106 in 288 msec 2023-07-12 08:18:28,667 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:28,668 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=e819f13729c8274f2f0efb5a42e75184, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:28,668 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149908668"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149908668"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149908668"}]},"ts":"1689149908668"} 2023-07-12 08:18:28,669 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=ae71929909c3f585c1f0e7f3408f83d2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:28,669 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149908669"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149908669"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149908669"}]},"ts":"1689149908669"} 2023-07-12 08:18:28,670 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=79, state=RUNNABLE; OpenRegionProcedure e819f13729c8274f2f0efb5a42e75184, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:28,671 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=78, state=RUNNABLE; OpenRegionProcedure 
ae71929909c3f585c1f0e7f3408f83d2, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:28,826 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:28,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e819f13729c8274f2f0efb5a42e75184, NAME => 'hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:28,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 08:18:28,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. service=MultiRowMutationService 2023-07-12 08:18:28,827 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 08:18:28,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:28,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:28,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:28,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:28,829 INFO [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:28,831 DEBUG [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m 2023-07-12 08:18:28,831 DEBUG [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m 2023-07-12 08:18:28,831 INFO [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, 
incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e819f13729c8274f2f0efb5a42e75184 columnFamilyName m 2023-07-12 08:18:28,841 DEBUG [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] regionserver.HStore(539): loaded hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m/1f73483a707e46ffb81452ad99f50cbc 2023-07-12 08:18:28,846 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 24863ffb0d764ff2b4d046821226664f 2023-07-12 08:18:28,847 DEBUG [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] regionserver.HStore(539): loaded hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m/24863ffb0d764ff2b4d046821226664f 2023-07-12 08:18:28,847 INFO [StoreOpener-e819f13729c8274f2f0efb5a42e75184-1] regionserver.HStore(310): Store=e819f13729c8274f2f0efb5a42e75184/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:28,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:28,849 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:28,854 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:28,855 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e819f13729c8274f2f0efb5a42e75184; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5fbae1c1, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:28,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e819f13729c8274f2f0efb5a42e75184: 2023-07-12 08:18:28,856 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184., pid=82, masterSystemTime=1689149908822 2023-07-12 08:18:28,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:28,858 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:28,858 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 
2023-07-12 08:18:28,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ae71929909c3f585c1f0e7f3408f83d2, NAME => 'hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:28,858 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=e819f13729c8274f2f0efb5a42e75184, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:28,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:28,858 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149908858"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149908858"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149908858"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149908858"}]},"ts":"1689149908858"} 2023-07-12 08:18:28,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:28,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:28,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:28,860 INFO [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:28,862 DEBUG [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/info 2023-07-12 08:18:28,862 DEBUG [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/info 2023-07-12 08:18:28,863 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=79 2023-07-12 08:18:28,863 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=79, state=SUCCESS; OpenRegionProcedure e819f13729c8274f2f0efb5a42e75184, server=jenkins-hbase4.apache.org,42347,1689149897465 in 190 msec 2023-07-12 08:18:28,863 INFO [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ae71929909c3f585c1f0e7f3408f83d2 columnFamilyName info 2023-07-12 08:18:28,864 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e819f13729c8274f2f0efb5a42e75184, REOPEN/MOVE in 495 msec 2023-07-12 08:18:28,876 DEBUG [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] regionserver.HStore(539): loaded hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/info/57474fb50e604a99bb4c089b35db0e64 2023-07-12 08:18:28,876 INFO [StoreOpener-ae71929909c3f585c1f0e7f3408f83d2-1] regionserver.HStore(310): Store=ae71929909c3f585c1f0e7f3408f83d2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:28,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:28,878 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:28,882 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:28,883 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ae71929909c3f585c1f0e7f3408f83d2; next sequenceid=13; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11342087840, jitterRate=0.056314244866371155}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:28,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ae71929909c3f585c1f0e7f3408f83d2: 2023-07-12 08:18:28,884 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2., pid=83, masterSystemTime=1689149908822 2023-07-12 08:18:28,886 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:28,886 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 
2023-07-12 08:18:28,886 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=ae71929909c3f585c1f0e7f3408f83d2, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:28,887 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149908886"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149908886"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149908886"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149908886"}]},"ts":"1689149908886"} 2023-07-12 08:18:28,890 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=78 2023-07-12 08:18:28,890 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=78, state=SUCCESS; OpenRegionProcedure ae71929909c3f585c1f0e7f3408f83d2, server=jenkins-hbase4.apache.org,42347,1689149897465 in 217 msec 2023-07-12 08:18:28,908 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ae71929909c3f585c1f0e7f3408f83d2, REOPEN/MOVE in 524 msec 2023-07-12 08:18:29,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-12 08:18:29,373 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362, jenkins-hbase4.apache.org,38647,1689149897534, jenkins-hbase4.apache.org,41817,1689149901106] are moved back to default 2023-07-12 08:18:29,373 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-12 08:18:29,373 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:29,375 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41817] ipc.CallRunner(144): callId: 13 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:60850 deadline: 1689149969375, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42347 startCode=1689149897465. As of locationSeqNum=32. 
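[editor's note] The entries above trace an RSGroupAdminService.MoveServers request finishing ("Move servers done: default => bar") after the regions hosted on the moved servers were transited back to the default group. The following is a minimal, hedged sketch of how a client could issue that call, assuming the RSGroupAdminClient shipped in the hbase-rsgroup module; the class name MoveServersSketch is illustrative and the host:port values are copied from this log, not from the test's own code.

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("bar"); // only needed if the target group does not exist yet
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 36999));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38647));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41817));
      // Regions hosted on these servers are first moved off them; the REOPEN/MOVE
      // procedures for hbase:rsgroup and hbase:namespace logged above are that step.
      rsGroupAdmin.moveServers(servers, "bar");
    }
  }
}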
2023-07-12 08:18:29,490 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:29,490 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:29,494 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-12 08:18:29,495 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:29,497 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:29,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-12 08:18:29,501 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:29,501 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-12 08:18:29,502 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41817] ipc.CallRunner(144): callId: 198 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:60866 deadline: 1689149969502, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42347 startCode=1689149897465. As of locationSeqNum=32. 
2023-07-12 08:18:29,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 08:18:29,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 08:18:29,612 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:29,613 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 08:18:29,613 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:29,614 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:29,617 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:29,619 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:29,620 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 empty. 2023-07-12 08:18:29,621 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:29,621 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 08:18:29,655 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:29,659 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2768af643112dbe301204e6f49c2d810, NAME => 'Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:29,683 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:29,683 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 2768af643112dbe301204e6f49c2d810, disabling compactions & flushes 2023-07-12 08:18:29,683 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:29,683 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:29,683 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. after waiting 0 ms 2023-07-12 08:18:29,683 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:29,683 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:29,683 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 2768af643112dbe301204e6f49c2d810: 2023-07-12 08:18:29,686 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:29,687 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149909687"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149909687"}]},"ts":"1689149909687"} 2023-07-12 08:18:29,689 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 08:18:29,690 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:29,690 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149909690"}]},"ts":"1689149909690"} 2023-07-12 08:18:29,691 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-12 08:18:29,695 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, ASSIGN}] 2023-07-12 08:18:29,697 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, ASSIGN 2023-07-12 08:18:29,698 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:29,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 08:18:29,850 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:29,850 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149909850"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149909850"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149909850"}]},"ts":"1689149909850"} 2023-07-12 08:18:29,852 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:30,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 
2023-07-12 08:18:30,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2768af643112dbe301204e6f49c2d810, NAME => 'Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:30,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:30,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,011 INFO [StoreOpener-2768af643112dbe301204e6f49c2d810-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,013 DEBUG [StoreOpener-2768af643112dbe301204e6f49c2d810-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/f 2023-07-12 08:18:30,013 DEBUG [StoreOpener-2768af643112dbe301204e6f49c2d810-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/f 2023-07-12 08:18:30,014 INFO [StoreOpener-2768af643112dbe301204e6f49c2d810-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2768af643112dbe301204e6f49c2d810 columnFamilyName f 2023-07-12 08:18:30,014 INFO [StoreOpener-2768af643112dbe301204e6f49c2d810-1] regionserver.HStore(310): Store=2768af643112dbe301204e6f49c2d810/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:30,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,016 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:30,023 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2768af643112dbe301204e6f49c2d810; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11129958720, jitterRate=0.036558181047439575}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:30,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2768af643112dbe301204e6f49c2d810: 2023-07-12 08:18:30,025 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810., pid=86, masterSystemTime=1689149910004 2023-07-12 08:18:30,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:30,027 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 
2023-07-12 08:18:30,028 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:30,028 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149910028"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149910028"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149910028"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149910028"}]},"ts":"1689149910028"} 2023-07-12 08:18:30,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-12 08:18:30,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,42347,1689149897465 in 178 msec 2023-07-12 08:18:30,033 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-12 08:18:30,033 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, ASSIGN in 336 msec 2023-07-12 08:18:30,034 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:30,034 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149910034"}]},"ts":"1689149910034"} 2023-07-12 08:18:30,036 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-12 08:18:30,038 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:30,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 541 msec 2023-07-12 08:18:30,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 08:18:30,107 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-12 08:18:30,107 DEBUG [Listener at localhost/44853] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-12 08:18:30,107 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:30,112 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
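[editor's note] The create request logged at 08:18:29,497 spells out the full descriptor of Group_testFailRemoveGroup: one column family 'f', a single version, no bloom filter, region replication 1. Below is a hedged Admin-API sketch that builds an equivalent descriptor; it is illustrative only (class name CreateTableSketch is made up) and is not the test's literal code, which goes through its own test utilities.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Equivalent of: create 'Group_testFailRemoveGroup', {REGION_REPLICATION => '1'},
      //                {NAME => 'f', VERSIONS => '1', BLOOMFILTER => 'NONE'}
      admin.createTable(
          TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)
                  .setBloomFilterType(BloomType.NONE)
                  .build())
              .build());
      // The CreateTableProcedure (pid=84 above) runs on the master; createTable blocks
      // until the table is reported created and enabled.
    }
  }
}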
2023-07-12 08:18:30,113 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:30,113 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-12 08:18:30,115 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-12 08:18:30,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:30,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 08:18:30,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:30,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:30,132 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-12 08:18:30,132 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region 2768af643112dbe301204e6f49c2d810 to RSGroup bar 2023-07-12 08:18:30,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:30,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:30,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:30,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:30,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 08:18:30,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:30,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, REOPEN/MOVE 2023-07-12 08:18:30,134 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-12 08:18:30,135 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, REOPEN/MOVE 2023-07-12 08:18:30,136 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:30,136 DEBUG [PEWorker-4] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149910136"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149910136"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149910136"}]},"ts":"1689149910136"} 2023-07-12 08:18:30,140 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:30,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2768af643112dbe301204e6f49c2d810, disabling compactions & flushes 2023-07-12 08:18:30,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:30,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:30,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. after waiting 0 ms 2023-07-12 08:18:30,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:30,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:30,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 
2023-07-12 08:18:30,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2768af643112dbe301204e6f49c2d810: 2023-07-12 08:18:30,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2768af643112dbe301204e6f49c2d810 move to jenkins-hbase4.apache.org,36999,1689149897362 record at close sequenceid=2 2023-07-12 08:18:30,305 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,305 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=CLOSED 2023-07-12 08:18:30,305 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149910305"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149910305"}]},"ts":"1689149910305"} 2023-07-12 08:18:30,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-12 08:18:30,309 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,42347,1689149897465 in 169 msec 2023-07-12 08:18:30,310 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36999,1689149897362; forceNewPlan=false, retain=false 2023-07-12 08:18:30,461 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 08:18:30,461 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:30,461 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149910461"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149910461"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149910461"}]},"ts":"1689149910461"} 2023-07-12 08:18:30,464 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:30,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 
2023-07-12 08:18:30,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2768af643112dbe301204e6f49c2d810, NAME => 'Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:30,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:30,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,624 INFO [StoreOpener-2768af643112dbe301204e6f49c2d810-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,625 DEBUG [StoreOpener-2768af643112dbe301204e6f49c2d810-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/f 2023-07-12 08:18:30,625 DEBUG [StoreOpener-2768af643112dbe301204e6f49c2d810-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/f 2023-07-12 08:18:30,626 INFO [StoreOpener-2768af643112dbe301204e6f49c2d810-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2768af643112dbe301204e6f49c2d810 columnFamilyName f 2023-07-12 08:18:30,626 INFO [StoreOpener-2768af643112dbe301204e6f49c2d810-1] regionserver.HStore(310): Store=2768af643112dbe301204e6f49c2d810/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:30,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,629 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,630 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 08:18:30,632 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:30,633 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2768af643112dbe301204e6f49c2d810; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11633735680, jitterRate=0.08347606658935547}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:30,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2768af643112dbe301204e6f49c2d810: 2023-07-12 08:18:30,634 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810., pid=89, masterSystemTime=1689149910616 2023-07-12 08:18:30,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:30,636 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:30,637 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:30,637 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149910637"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149910637"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149910637"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149910637"}]},"ts":"1689149910637"} 2023-07-12 08:18:30,640 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-12 08:18:30,641 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,36999,1689149897362 in 175 msec 2023-07-12 08:18:30,642 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, REOPEN/MOVE in 508 msec 2023-07-12 08:18:31,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-12 08:18:31,135 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
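[editor's note] The block above is the effect of an RSGroupAdminService.MoveTables request: region 2768af643112dbe301204e6f49c2d810 is closed on its current server and reopened on a server in group "bar", then the endpoint reports "All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar." A hedged sketch of that call, again assuming the hbase-rsgroup RSGroupAdminClient rather than the test's literal code:

import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Moving a table to a group reassigns each of its regions onto servers that
      // belong to that group, which is the CLOSE/OPEN pair recorded in the log above.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
    }
  }
}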
2023-07-12 08:18:31,135 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:31,140 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:31,140 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:31,144 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-12 08:18:31,144 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:31,145 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-12 08:18:31,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:31,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:58548 deadline: 1689151111145, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-12 08:18:31,147 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:41817] to rsgroup default 2023-07-12 08:18:31,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:31,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:58548 deadline: 1689151111147, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-12 08:18:31,150 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-12 08:18:31,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:31,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 08:18:31,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:31,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:31,164 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-12 08:18:31,164 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region 2768af643112dbe301204e6f49c2d810 to RSGroup default 2023-07-12 08:18:31,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, REOPEN/MOVE 2023-07-12 08:18:31,166 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 08:18:31,167 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, REOPEN/MOVE 2023-07-12 08:18:31,171 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:31,171 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149911171"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149911171"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149911171"}]},"ts":"1689149911171"} 2023-07-12 08:18:31,173 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:31,327 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:31,328 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2768af643112dbe301204e6f49c2d810, disabling compactions & flushes 2023-07-12 08:18:31,328 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:31,328 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:31,328 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. after waiting 0 ms 2023-07-12 08:18:31,328 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:31,343 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 08:18:31,346 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 
2023-07-12 08:18:31,346 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2768af643112dbe301204e6f49c2d810: 2023-07-12 08:18:31,346 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2768af643112dbe301204e6f49c2d810 move to jenkins-hbase4.apache.org,42347,1689149897465 record at close sequenceid=5 2023-07-12 08:18:31,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:31,348 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=CLOSED 2023-07-12 08:18:31,348 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149911348"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149911348"}]},"ts":"1689149911348"} 2023-07-12 08:18:31,354 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-12 08:18:31,355 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,36999,1689149897362 in 178 msec 2023-07-12 08:18:31,355 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:31,506 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:31,506 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149911506"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149911506"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149911506"}]},"ts":"1689149911506"} 2023-07-12 08:18:31,508 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:31,664 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 
2023-07-12 08:18:31,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2768af643112dbe301204e6f49c2d810, NAME => 'Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:31,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:31,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:31,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:31,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:31,667 INFO [StoreOpener-2768af643112dbe301204e6f49c2d810-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:31,669 DEBUG [StoreOpener-2768af643112dbe301204e6f49c2d810-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/f 2023-07-12 08:18:31,669 DEBUG [StoreOpener-2768af643112dbe301204e6f49c2d810-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/f 2023-07-12 08:18:31,669 INFO [StoreOpener-2768af643112dbe301204e6f49c2d810-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2768af643112dbe301204e6f49c2d810 columnFamilyName f 2023-07-12 08:18:31,670 INFO [StoreOpener-2768af643112dbe301204e6f49c2d810-1] regionserver.HStore(310): Store=2768af643112dbe301204e6f49c2d810/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:31,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:31,673 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:31,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:31,678 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2768af643112dbe301204e6f49c2d810; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10722427520, jitterRate=-0.0013961195945739746}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:31,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2768af643112dbe301204e6f49c2d810: 2023-07-12 08:18:31,679 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810., pid=92, masterSystemTime=1689149911660 2023-07-12 08:18:31,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:31,685 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:31,685 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:31,686 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149911685"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149911685"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149911685"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149911685"}]},"ts":"1689149911685"} 2023-07-12 08:18:31,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-12 08:18:31,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,42347,1689149897465 in 183 msec 2023-07-12 08:18:31,695 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, REOPEN/MOVE in 529 msec 2023-07-12 08:18:32,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-12 08:18:32,168 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-12 08:18:32,169 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:32,173 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,173 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,176 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-12 08:18:32,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:32,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:58548 deadline: 1689151112176, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
2023-07-12 08:18:32,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:41817] to rsgroup default 2023-07-12 08:18:32,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:32,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 08:18:32,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:32,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:32,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-12 08:18:32,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362, jenkins-hbase4.apache.org,38647,1689149897534, jenkins-hbase4.apache.org,41817,1689149901106] are moved back to bar 2023-07-12 08:18:32,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-12 08:18:32,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:32,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,190 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,193 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-12 08:18:32,195 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41817] ipc.CallRunner(144): callId: 223 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:60866 deadline: 1689149972194, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42347 startCode=1689149897465. As of locationSeqNum=10. 
2023-07-12 08:18:32,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:32,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:32,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 08:18:32,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:32,311 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,311 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,313 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,313 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,315 INFO [Listener at localhost/44853] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-12 08:18:32,315 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-12 08:18:32,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-12 08:18:32,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-12 08:18:32,323 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149912323"}]},"ts":"1689149912323"} 2023-07-12 08:18:32,325 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-12 08:18:32,327 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-12 08:18:32,327 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, UNASSIGN}] 2023-07-12 08:18:32,329 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, UNASSIGN 2023-07-12 08:18:32,330 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:32,330 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149912330"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149912330"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149912330"}]},"ts":"1689149912330"} 2023-07-12 08:18:32,331 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:32,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-12 08:18:32,483 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:32,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2768af643112dbe301204e6f49c2d810, disabling compactions & flushes 2023-07-12 08:18:32,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:32,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:32,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. after waiting 0 ms 2023-07-12 08:18:32,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 2023-07-12 08:18:32,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 08:18:32,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810. 
2023-07-12 08:18:32,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2768af643112dbe301204e6f49c2d810: 2023-07-12 08:18:32,492 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:32,493 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=2768af643112dbe301204e6f49c2d810, regionState=CLOSED 2023-07-12 08:18:32,493 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689149912493"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149912493"}]},"ts":"1689149912493"} 2023-07-12 08:18:32,498 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-12 08:18:32,498 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure 2768af643112dbe301204e6f49c2d810, server=jenkins-hbase4.apache.org,42347,1689149897465 in 165 msec 2023-07-12 08:18:32,507 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-12 08:18:32,507 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=2768af643112dbe301204e6f49c2d810, UNASSIGN in 171 msec 2023-07-12 08:18:32,508 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149912507"}]},"ts":"1689149912507"} 2023-07-12 08:18:32,515 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-12 08:18:32,518 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-12 08:18:32,521 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 204 msec 2023-07-12 08:18:32,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-12 08:18:32,622 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-12 08:18:32,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-12 08:18:32,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 08:18:32,627 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 08:18:32,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-12 08:18:32,630 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 08:18:32,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:32,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:32,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:32,647 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:32,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-12 08:18:32,655 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/recovered.edits] 2023-07-12 08:18:32,665 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/recovered.edits/10.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810/recovered.edits/10.seqid 2023-07-12 08:18:32,666 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testFailRemoveGroup/2768af643112dbe301204e6f49c2d810 2023-07-12 08:18:32,666 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 08:18:32,674 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 08:18:32,687 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-12 08:18:32,705 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-12 08:18:32,707 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 08:18:32,707 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-12 08:18:32,707 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149912707"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:32,710 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 08:18:32,710 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 2768af643112dbe301204e6f49c2d810, NAME => 'Group_testFailRemoveGroup,,1689149909497.2768af643112dbe301204e6f49c2d810.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 08:18:32,710 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-12 08:18:32,711 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689149912710"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:32,712 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-12 08:18:32,715 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 08:18:32,717 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 92 msec 2023-07-12 08:18:32,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-12 08:18:32,755 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-12 08:18:32,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:32,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:32,761 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:32,762 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:32,762 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:32,763 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:32,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:32,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:32,770 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:32,774 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:32,775 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:32,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:32,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:32,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:32,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:32,807 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,807 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,810 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:32,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:32,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 343 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151112810, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:32,811 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:32,813 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:32,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,814 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:32,815 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:32,815 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:32,841 INFO [Listener at localhost/44853] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=504 (was 496) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1801598716_17 at /127.0.0.1:51814 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1801598716_17 at /127.0.0.1:50278 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2c378da6-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1801598716_17 at /127.0.0.1:50248 [Waiting for operation #6] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_972211395_17 at /127.0.0.1:51830 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1801598716_17 at /127.0.0.1:50292 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=778 (was 776) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=557 (was 528) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 174) - ProcessCount LEAK? -, AvailableMemoryMB=3704 (was 4056) 2023-07-12 08:18:32,842 WARN [Listener at localhost/44853] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 08:18:32,867 INFO [Listener at localhost/44853] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=503, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=557, ProcessCount=176, AvailableMemoryMB=3701 2023-07-12 08:18:32,867 WARN [Listener at localhost/44853] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 08:18:32,867 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-12 08:18:32,873 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,873 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,874 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:32,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:32,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:32,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:32,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:32,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:32,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:32,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:32,883 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:32,889 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:32,890 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:32,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:32,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:32,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:32,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:32,903 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:32,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:32,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 371 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151112906, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:32,907 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:32,927 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:32,928 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,929 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,929 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:32,930 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:32,930 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:32,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:32,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:32,937 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_847554951 2023-07-12 08:18:32,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_847554951 2023-07-12 08:18:32,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:32,949 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:32,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:32,952 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:32,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,959 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36999] to rsgroup Group_testMultiTableMove_847554951 2023-07-12 08:18:32,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:32,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_847554951 2023-07-12 08:18:32,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:32,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:32,966 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 08:18:32,966 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362] are moved back to default 2023-07-12 08:18:32,966 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_847554951 2023-07-12 08:18:32,966 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:32,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:32,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:32,973 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_847554951 2023-07-12 08:18:32,973 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:32,976 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:32,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 08:18:32,979 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:32,979 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-12 08:18:32,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 08:18:32,981 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:32,981 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_847554951 2023-07-12 08:18:32,982 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:32,982 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:32,985 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:32,987 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:32,987 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7 empty. 
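The log just above records the group setup for this test case: "add rsgroup Group_testMultiTableMove_847554951" followed by "move servers [jenkins-hbase4.apache.org:36999] to rsgroup Group_testMultiTableMove_847554951". A minimal sketch of those two client calls is shown below; RSGroupAdminClient and Address.fromParts are the client entry points referenced elsewhere in this log, while the wrapper class and method are illustrative only (group name and host/port are copied from the log entries).

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    class GroupSetupSketch {
      static void setUpGroup(RSGroupAdminClient rsGroupAdmin) throws IOException {
        String group = "Group_testMultiTableMove_847554951";
        // "add rsgroup Group_testMultiTableMove_847554951"
        rsGroupAdmin.addRSGroup(group);
        // "move servers [jenkins-hbase4.apache.org:36999] to rsgroup ..."
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 36999)),
            group);
      }
    }

Each of these calls triggers the RSGroupInfoManagerImpl znode updates ("Updating znode: /hbase/rsgroup/...") visible above, since the group membership is persisted to ZooKeeper.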
2023-07-12 08:18:32,988 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:32,988 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 08:18:33,013 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:33,015 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 98e5aad0be40c618669a9c1d8cb7e5e7, NAME => 'GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:33,040 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:33,041 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 98e5aad0be40c618669a9c1d8cb7e5e7, disabling compactions & flushes 2023-07-12 08:18:33,041 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:33,041 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:33,041 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. after waiting 0 ms 2023-07-12 08:18:33,041 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:33,041 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 
2023-07-12 08:18:33,041 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 98e5aad0be40c618669a9c1d8cb7e5e7: 2023-07-12 08:18:33,044 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:33,045 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149913045"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149913045"}]},"ts":"1689149913045"} 2023-07-12 08:18:33,049 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:33,050 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:33,051 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149913050"}]},"ts":"1689149913050"} 2023-07-12 08:18:33,056 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-12 08:18:33,061 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:33,061 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:33,061 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:33,061 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:33,061 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:33,061 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, ASSIGN}] 2023-07-12 08:18:33,063 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, ASSIGN 2023-07-12 08:18:33,067 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38647,1689149897534; forceNewPlan=false, retain=false 2023-07-12 08:18:33,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 08:18:33,217 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 08:18:33,218 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=98e5aad0be40c618669a9c1d8cb7e5e7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:33,219 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149913218"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149913218"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149913218"}]},"ts":"1689149913218"} 2023-07-12 08:18:33,220 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 98e5aad0be40c618669a9c1d8cb7e5e7, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:33,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 08:18:33,377 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:33,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 98e5aad0be40c618669a9c1d8cb7e5e7, NAME => 'GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:33,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:33,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:33,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:33,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:33,379 INFO [StoreOpener-98e5aad0be40c618669a9c1d8cb7e5e7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:33,380 DEBUG [StoreOpener-98e5aad0be40c618669a9c1d8cb7e5e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/f 2023-07-12 08:18:33,380 DEBUG [StoreOpener-98e5aad0be40c618669a9c1d8cb7e5e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/f 2023-07-12 08:18:33,381 INFO [StoreOpener-98e5aad0be40c618669a9c1d8cb7e5e7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 98e5aad0be40c618669a9c1d8cb7e5e7 columnFamilyName f 2023-07-12 08:18:33,382 INFO [StoreOpener-98e5aad0be40c618669a9c1d8cb7e5e7-1] regionserver.HStore(310): Store=98e5aad0be40c618669a9c1d8cb7e5e7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:33,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:33,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:33,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:33,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:33,388 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 98e5aad0be40c618669a9c1d8cb7e5e7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10560515840, jitterRate=-0.016475319862365723}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:33,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 98e5aad0be40c618669a9c1d8cb7e5e7: 2023-07-12 08:18:33,389 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7., pid=99, masterSystemTime=1689149913372 2023-07-12 08:18:33,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:33,391 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 
2023-07-12 08:18:33,392 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=98e5aad0be40c618669a9c1d8cb7e5e7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:33,392 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149913391"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149913391"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149913391"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149913391"}]},"ts":"1689149913391"} 2023-07-12 08:18:33,395 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-12 08:18:33,395 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 98e5aad0be40c618669a9c1d8cb7e5e7, server=jenkins-hbase4.apache.org,38647,1689149897534 in 173 msec 2023-07-12 08:18:33,396 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-12 08:18:33,396 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, ASSIGN in 334 msec 2023-07-12 08:18:33,397 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:33,397 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149913397"}]},"ts":"1689149913397"} 2023-07-12 08:18:33,398 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-12 08:18:33,400 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:33,402 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 424 msec 2023-07-12 08:18:33,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 08:18:33,584 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-12 08:18:33,584 DEBUG [Listener at localhost/44853] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-12 08:18:33,584 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:33,588 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
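The CreateTableProcedure that just finished (pid=97) corresponds to creating 'GrouptestMultiTableMoveA' with a single column family 'f' and the default attributes shown in the HMaster log line. A rough client-side equivalent using the standard HBase 2.x Admin and descriptor builders is sketched below; how the Admin instance is obtained is left out, and this is a sketch of the operation rather than the test's actual code.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    class CreateTableSketch {
      static void createMoveTable(Admin admin) throws IOException {
        TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
        // One family 'f' with VERSIONS => '1', matching the descriptor in the log.
        admin.createTable(TableDescriptorBuilder.newBuilder(table)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)
                .build())
            .build());
        // The "Waiting until all regions ... get assigned" entries that follow come from
        // the test harness calling HBaseTestingUtility#waitUntilAllRegionsAssigned(table).
      }
    }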
2023-07-12 08:18:33,588 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:33,588 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-12 08:18:33,590 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:33,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 08:18:33,593 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:33,593 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-12 08:18:33,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 08:18:33,595 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:33,596 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_847554951 2023-07-12 08:18:33,596 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:33,596 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:33,602 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:33,603 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:33,604 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2 empty. 
2023-07-12 08:18:33,604 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:33,604 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 08:18:33,619 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:33,621 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => e605d9003f56f37aad0ec584c1a3dcb2, NAME => 'GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:33,644 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:33,644 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing e605d9003f56f37aad0ec584c1a3dcb2, disabling compactions & flushes 2023-07-12 08:18:33,644 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:33,644 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:33,644 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. after waiting 0 ms 2023-07-12 08:18:33,644 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:33,644 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 
2023-07-12 08:18:33,644 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for e605d9003f56f37aad0ec584c1a3dcb2: 2023-07-12 08:18:33,647 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:33,648 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149913648"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149913648"}]},"ts":"1689149913648"} 2023-07-12 08:18:33,650 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:33,651 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:33,651 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149913651"}]},"ts":"1689149913651"} 2023-07-12 08:18:33,652 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-12 08:18:33,655 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:33,656 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:33,656 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:33,656 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:33,656 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:33,656 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, ASSIGN}] 2023-07-12 08:18:33,659 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, ASSIGN 2023-07-12 08:18:33,660 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:33,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 08:18:33,811 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 08:18:33,812 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e605d9003f56f37aad0ec584c1a3dcb2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:33,813 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149913812"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149913812"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149913812"}]},"ts":"1689149913812"} 2023-07-12 08:18:33,816 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure e605d9003f56f37aad0ec584c1a3dcb2, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:33,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 08:18:33,974 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:33,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e605d9003f56f37aad0ec584c1a3dcb2, NAME => 'GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:33,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:33,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:33,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:33,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:33,977 INFO [StoreOpener-e605d9003f56f37aad0ec584c1a3dcb2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:33,981 DEBUG [StoreOpener-e605d9003f56f37aad0ec584c1a3dcb2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/f 2023-07-12 08:18:33,981 DEBUG [StoreOpener-e605d9003f56f37aad0ec584c1a3dcb2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/f 2023-07-12 08:18:33,981 INFO [StoreOpener-e605d9003f56f37aad0ec584c1a3dcb2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e605d9003f56f37aad0ec584c1a3dcb2 columnFamilyName f 2023-07-12 08:18:33,982 INFO [StoreOpener-e605d9003f56f37aad0ec584c1a3dcb2-1] regionserver.HStore(310): Store=e605d9003f56f37aad0ec584c1a3dcb2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:33,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:33,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:33,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:33,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:34,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e605d9003f56f37aad0ec584c1a3dcb2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11633752960, jitterRate=0.0834776759147644}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:34,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e605d9003f56f37aad0ec584c1a3dcb2: 2023-07-12 08:18:34,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2., pid=102, masterSystemTime=1689149913968 2023-07-12 08:18:34,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:34,003 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 
2023-07-12 08:18:34,004 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e605d9003f56f37aad0ec584c1a3dcb2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:34,004 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149914004"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149914004"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149914004"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149914004"}]},"ts":"1689149914004"} 2023-07-12 08:18:34,008 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-12 08:18:34,008 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure e605d9003f56f37aad0ec584c1a3dcb2, server=jenkins-hbase4.apache.org,42347,1689149897465 in 189 msec 2023-07-12 08:18:34,010 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-12 08:18:34,010 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, ASSIGN in 352 msec 2023-07-12 08:18:34,011 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:34,011 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149914011"}]},"ts":"1689149914011"} 2023-07-12 08:18:34,013 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-12 08:18:34,021 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:34,022 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 431 msec 2023-07-12 08:18:34,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 08:18:34,199 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-12 08:18:34,200 DEBUG [Listener at localhost/44853] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-12 08:18:34,200 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:34,204 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-12 08:18:34,205 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:34,205 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-12 08:18:34,206 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:34,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 08:18:34,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:34,238 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 08:18:34,239 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:34,239 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_847554951 2023-07-12 08:18:34,247 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_847554951 2023-07-12 08:18:34,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:34,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_847554951 2023-07-12 08:18:34,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:34,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:34,253 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_847554951 2023-07-12 08:18:34,254 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region e605d9003f56f37aad0ec584c1a3dcb2 to RSGroup Group_testMultiTableMove_847554951 2023-07-12 08:18:34,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, REOPEN/MOVE 2023-07-12 08:18:34,255 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_847554951 2023-07-12 08:18:34,256 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region 98e5aad0be40c618669a9c1d8cb7e5e7 to RSGroup Group_testMultiTableMove_847554951 2023-07-12 08:18:34,256 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, REOPEN/MOVE 2023-07-12 08:18:34,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, REOPEN/MOVE 2023-07-12 08:18:34,257 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=e605d9003f56f37aad0ec584c1a3dcb2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:34,258 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_847554951, current retry=0 2023-07-12 08:18:34,258 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149914257"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149914257"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149914257"}]},"ts":"1689149914257"} 2023-07-12 08:18:34,261 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, REOPEN/MOVE 2023-07-12 08:18:34,263 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=98e5aad0be40c618669a9c1d8cb7e5e7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:34,263 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149914263"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149914263"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149914263"}]},"ts":"1689149914263"} 2023-07-12 08:18:34,264 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure e605d9003f56f37aad0ec584c1a3dcb2, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:34,266 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure 98e5aad0be40c618669a9c1d8cb7e5e7, server=jenkins-hbase4.apache.org,38647,1689149897534}] 2023-07-12 08:18:34,581 INFO [AsyncFSWAL-0-hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData-prefix:jenkins-hbase4.apache.org,44301,1689149895428] wal.AbstractFSWAL(1141): Slow sync cost: 310 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41167,DS-210f6c6b-127c-4179-bc3c-20e846cc6403,DISK], DatanodeInfoWithStorage[127.0.0.1:46329,DS-191ac456-d2fd-44f6-9c8c-3853198c2ad3,DISK], DatanodeInfoWithStorage[127.0.0.1:32775,DS-9f8a10de-1694-43d5-8c9b-f8e7b9bd282b,DISK]] 
2023-07-12 08:18:34,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:34,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e605d9003f56f37aad0ec584c1a3dcb2, disabling compactions & flushes 2023-07-12 08:18:34,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:34,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:34,583 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. after waiting 0 ms 2023-07-12 08:18:34,583 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:34,590 WARN [PacketResponder: BP-1887393900-172.31.14.131-1689149891542:blk_1073741829_1005, type=LAST_IN_PIPELINE] datanode.BlockReceiver$PacketResponder(1636): Slow PacketResponder send ack to upstream took 310ms (threshold=300ms), PacketResponder: BP-1887393900-172.31.14.131-1689149891542:blk_1073741829_1005, type=LAST_IN_PIPELINE, replyAck=seqno: 707 reply: SUCCESS downstreamAckTimeNanos: 0 flag: 0 2023-07-12 08:18:34,590 WARN [PacketResponder: BP-1887393900-172.31.14.131-1689149891542:blk_1073741829_1005, type=LAST_IN_PIPELINE] datanode.BlockReceiver$PacketResponder(1636): Slow PacketResponder send ack to upstream took 310ms (threshold=300ms), PacketResponder: BP-1887393900-172.31.14.131-1689149891542:blk_1073741829_1005, type=LAST_IN_PIPELINE, replyAck=seqno: 707 reply: SUCCESS downstreamAckTimeNanos: 0 flag: 0 2023-07-12 08:18:34,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:34,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 
2023-07-12 08:18:34,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e605d9003f56f37aad0ec584c1a3dcb2: 2023-07-12 08:18:34,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e605d9003f56f37aad0ec584c1a3dcb2 move to jenkins-hbase4.apache.org,36999,1689149897362 record at close sequenceid=2 2023-07-12 08:18:34,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:34,600 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=e605d9003f56f37aad0ec584c1a3dcb2, regionState=CLOSED 2023-07-12 08:18:34,600 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149914600"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149914600"}]},"ts":"1689149914600"} 2023-07-12 08:18:34,603 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-12 08:18:34,604 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure e605d9003f56f37aad0ec584c1a3dcb2, server=jenkins-hbase4.apache.org,42347,1689149897465 in 337 msec 2023-07-12 08:18:34,604 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36999,1689149897362; forceNewPlan=false, retain=false 2023-07-12 08:18:34,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:34,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 98e5aad0be40c618669a9c1d8cb7e5e7, disabling compactions & flushes 2023-07-12 08:18:34,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:34,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:34,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. after waiting 0 ms 2023-07-12 08:18:34,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 
2023-07-12 08:18:34,755 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=e605d9003f56f37aad0ec584c1a3dcb2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:34,755 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149914755"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149914755"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149914755"}]},"ts":"1689149914755"} 2023-07-12 08:18:34,757 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=103, state=RUNNABLE; OpenRegionProcedure e605d9003f56f37aad0ec584c1a3dcb2, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:34,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:34,776 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:34,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 98e5aad0be40c618669a9c1d8cb7e5e7: 2023-07-12 08:18:34,776 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 98e5aad0be40c618669a9c1d8cb7e5e7 move to jenkins-hbase4.apache.org,36999,1689149897362 record at close sequenceid=2 2023-07-12 08:18:34,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:34,779 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=98e5aad0be40c618669a9c1d8cb7e5e7, regionState=CLOSED 2023-07-12 08:18:34,780 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149914779"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149914779"}]},"ts":"1689149914779"} 2023-07-12 08:18:34,783 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=104 2023-07-12 08:18:34,784 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure 98e5aad0be40c618669a9c1d8cb7e5e7, server=jenkins-hbase4.apache.org,38647,1689149897534 in 515 msec 2023-07-12 08:18:34,785 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36999,1689149897362; forceNewPlan=false, retain=false 2023-07-12 08:18:34,916 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 
2023-07-12 08:18:34,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e605d9003f56f37aad0ec584c1a3dcb2, NAME => 'GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:34,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:34,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:34,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:34,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:34,931 INFO [StoreOpener-e605d9003f56f37aad0ec584c1a3dcb2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:34,939 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=98e5aad0be40c618669a9c1d8cb7e5e7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:34,939 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149914939"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149914939"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149914939"}]},"ts":"1689149914939"} 2023-07-12 08:18:34,941 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=104, state=RUNNABLE; OpenRegionProcedure 98e5aad0be40c618669a9c1d8cb7e5e7, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:34,945 DEBUG [StoreOpener-e605d9003f56f37aad0ec584c1a3dcb2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/f 2023-07-12 08:18:34,945 DEBUG [StoreOpener-e605d9003f56f37aad0ec584c1a3dcb2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/f 2023-07-12 08:18:34,945 INFO [StoreOpener-e605d9003f56f37aad0ec584c1a3dcb2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e605d9003f56f37aad0ec584c1a3dcb2 columnFamilyName f 2023-07-12 08:18:34,947 INFO [StoreOpener-e605d9003f56f37aad0ec584c1a3dcb2-1] regionserver.HStore(310): Store=e605d9003f56f37aad0ec584c1a3dcb2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:34,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:34,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:34,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:34,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e605d9003f56f37aad0ec584c1a3dcb2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9758112160, jitterRate=-0.09120498597621918}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:34,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e605d9003f56f37aad0ec584c1a3dcb2: 2023-07-12 08:18:34,958 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2., pid=107, masterSystemTime=1689149914909 2023-07-12 08:18:34,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:34,960 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 
2023-07-12 08:18:34,960 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=e605d9003f56f37aad0ec584c1a3dcb2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:34,961 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149914960"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149914960"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149914960"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149914960"}]},"ts":"1689149914960"} 2023-07-12 08:18:34,965 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=103 2023-07-12 08:18:34,965 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=103, state=SUCCESS; OpenRegionProcedure e605d9003f56f37aad0ec584c1a3dcb2, server=jenkins-hbase4.apache.org,36999,1689149897362 in 205 msec 2023-07-12 08:18:34,966 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, REOPEN/MOVE in 711 msec 2023-07-12 08:18:35,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:35,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 98e5aad0be40c618669a9c1d8cb7e5e7, NAME => 'GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:35,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:35,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,106 INFO [StoreOpener-98e5aad0be40c618669a9c1d8cb7e5e7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,107 DEBUG [StoreOpener-98e5aad0be40c618669a9c1d8cb7e5e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/f 2023-07-12 08:18:35,107 DEBUG [StoreOpener-98e5aad0be40c618669a9c1d8cb7e5e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/f 2023-07-12 08:18:35,108 INFO [StoreOpener-98e5aad0be40c618669a9c1d8cb7e5e7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 98e5aad0be40c618669a9c1d8cb7e5e7 columnFamilyName f 2023-07-12 08:18:35,108 INFO [StoreOpener-98e5aad0be40c618669a9c1d8cb7e5e7-1] regionserver.HStore(310): Store=98e5aad0be40c618669a9c1d8cb7e5e7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:35,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,120 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 98e5aad0be40c618669a9c1d8cb7e5e7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11342384960, jitterRate=0.05634191632270813}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:35,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 98e5aad0be40c618669a9c1d8cb7e5e7: 2023-07-12 08:18:35,121 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7., pid=108, masterSystemTime=1689149915099 2023-07-12 08:18:35,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:35,124 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 
2023-07-12 08:18:35,125 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=98e5aad0be40c618669a9c1d8cb7e5e7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:35,125 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149915125"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149915125"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149915125"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149915125"}]},"ts":"1689149915125"} 2023-07-12 08:18:35,133 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=104 2023-07-12 08:18:35,133 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=104, state=SUCCESS; OpenRegionProcedure 98e5aad0be40c618669a9c1d8cb7e5e7, server=jenkins-hbase4.apache.org,36999,1689149897362 in 190 msec 2023-07-12 08:18:35,135 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, REOPEN/MOVE in 877 msec 2023-07-12 08:18:35,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-12 08:18:35,258 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_847554951. 2023-07-12 08:18:35,258 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:35,282 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:35,282 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:35,285 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 08:18:35,285 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:35,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 08:18:35,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:35,288 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:35,288 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:35,290 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_847554951 2023-07-12 08:18:35,290 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:35,292 INFO [Listener at localhost/44853] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-12 08:18:35,293 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-12 08:18:35,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 08:18:35,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-12 08:18:35,308 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149915308"}]},"ts":"1689149915308"} 2023-07-12 08:18:35,311 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-12 08:18:35,313 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-12 08:18:35,322 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, UNASSIGN}] 2023-07-12 08:18:35,325 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, UNASSIGN 2023-07-12 08:18:35,325 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=98e5aad0be40c618669a9c1d8cb7e5e7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:35,326 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149915325"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149915325"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149915325"}]},"ts":"1689149915325"} 2023-07-12 08:18:35,337 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure 98e5aad0be40c618669a9c1d8cb7e5e7, 
server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:35,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-12 08:18:35,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 98e5aad0be40c618669a9c1d8cb7e5e7, disabling compactions & flushes 2023-07-12 08:18:35,500 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:35,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:35,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. after waiting 0 ms 2023-07-12 08:18:35,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 2023-07-12 08:18:35,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 08:18:35,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7. 
2023-07-12 08:18:35,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 98e5aad0be40c618669a9c1d8cb7e5e7: 2023-07-12 08:18:35,511 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,511 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=98e5aad0be40c618669a9c1d8cb7e5e7, regionState=CLOSED 2023-07-12 08:18:35,512 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149915511"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149915511"}]},"ts":"1689149915511"} 2023-07-12 08:18:35,518 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-12 08:18:35,518 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure 98e5aad0be40c618669a9c1d8cb7e5e7, server=jenkins-hbase4.apache.org,36999,1689149897362 in 179 msec 2023-07-12 08:18:35,520 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-12 08:18:35,520 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98e5aad0be40c618669a9c1d8cb7e5e7, UNASSIGN in 196 msec 2023-07-12 08:18:35,521 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149915520"}]},"ts":"1689149915520"} 2023-07-12 08:18:35,522 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-12 08:18:35,524 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-12 08:18:35,536 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 240 msec 2023-07-12 08:18:35,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-12 08:18:35,600 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-12 08:18:35,601 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-12 08:18:35,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 08:18:35,605 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 08:18:35,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_847554951' 2023-07-12 08:18:35,607 DEBUG [PEWorker-5] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 08:18:35,612 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:35,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_847554951 2023-07-12 08:18:35,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:35,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:35,624 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/recovered.edits] 2023-07-12 08:18:35,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-12 08:18:35,631 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/recovered.edits/7.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7/recovered.edits/7.seqid 2023-07-12 08:18:35,632 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveA/98e5aad0be40c618669a9c1d8cb7e5e7 2023-07-12 08:18:35,632 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 08:18:35,636 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 08:18:35,638 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-12 08:18:35,643 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-12 08:18:35,645 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 08:18:35,645 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-12 08:18:35,645 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149915645"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:35,647 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 08:18:35,647 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 98e5aad0be40c618669a9c1d8cb7e5e7, NAME => 'GrouptestMultiTableMoveA,,1689149912975.98e5aad0be40c618669a9c1d8cb7e5e7.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 08:18:35,647 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-12 08:18:35,647 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689149915647"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:35,649 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-12 08:18:35,652 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 08:18:35,653 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 51 msec 2023-07-12 08:18:35,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-12 08:18:35,728 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-12 08:18:35,729 INFO [Listener at localhost/44853] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-12 08:18:35,729 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-12 08:18:35,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 08:18:35,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 08:18:35,733 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149915733"}]},"ts":"1689149915733"} 2023-07-12 08:18:35,736 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-12 08:18:35,738 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-12 08:18:35,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, UNASSIGN}] 2023-07-12 08:18:35,751 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, UNASSIGN 2023-07-12 08:18:35,751 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=e605d9003f56f37aad0ec584c1a3dcb2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:35,752 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149915751"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149915751"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149915751"}]},"ts":"1689149915751"} 2023-07-12 08:18:35,753 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure e605d9003f56f37aad0ec584c1a3dcb2, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:35,805 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 08:18:35,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 08:18:35,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:35,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e605d9003f56f37aad0ec584c1a3dcb2, disabling compactions & flushes 2023-07-12 08:18:35,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:35,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:35,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. after waiting 0 ms 2023-07-12 08:18:35,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 2023-07-12 08:18:35,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 08:18:35,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2. 
2023-07-12 08:18:35,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e605d9003f56f37aad0ec584c1a3dcb2: 2023-07-12 08:18:35,919 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:35,919 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=e605d9003f56f37aad0ec584c1a3dcb2, regionState=CLOSED 2023-07-12 08:18:35,919 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689149915919"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149915919"}]},"ts":"1689149915919"} 2023-07-12 08:18:35,928 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-12 08:18:35,928 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure e605d9003f56f37aad0ec584c1a3dcb2, server=jenkins-hbase4.apache.org,36999,1689149897362 in 168 msec 2023-07-12 08:18:35,930 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-12 08:18:35,930 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e605d9003f56f37aad0ec584c1a3dcb2, UNASSIGN in 189 msec 2023-07-12 08:18:35,931 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149915930"}]},"ts":"1689149915930"} 2023-07-12 08:18:35,932 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-12 08:18:35,934 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-12 08:18:35,935 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 205 msec 2023-07-12 08:18:36,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 08:18:36,036 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-12 08:18:36,036 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-12 08:18:36,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 08:18:36,039 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 08:18:36,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_847554951' 2023-07-12 08:18:36,040 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 08:18:36,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_847554951 2023-07-12 08:18:36,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:36,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-12 08:18:36,048 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:36,050 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/recovered.edits] 2023-07-12 08:18:36,054 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/recovered.edits/7.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2/recovered.edits/7.seqid 2023-07-12 08:18:36,055 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/GrouptestMultiTableMoveB/e605d9003f56f37aad0ec584c1a3dcb2 2023-07-12 08:18:36,055 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 08:18:36,057 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 08:18:36,059 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-12 08:18:36,061 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-12 08:18:36,063 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 08:18:36,063 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-12 08:18:36,063 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149916063"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:36,067 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 08:18:36,067 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e605d9003f56f37aad0ec584c1a3dcb2, NAME => 'GrouptestMultiTableMoveB,,1689149913589.e605d9003f56f37aad0ec584c1a3dcb2.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 08:18:36,067 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-12 08:18:36,067 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689149916067"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:36,072 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-12 08:18:36,083 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 08:18:36,084 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 47 msec 2023-07-12 08:18:36,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-12 08:18:36,148 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-12 08:18:36,152 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,152 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,154 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:36,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
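The teardown that follows moves the test group's region server back to the default group and then drops the now-empty group. A rough sketch of the same calls through RSGroupAdminClient, the client class named in the stack traces later in this log; the constructor and the moveServers/removeRSGroup signatures used here are assumptions based on that class, and the host/port and group name are copied from the log:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Move the server that had been assigned to the test group back to 'default';
          // with the table already deleted there are 0 regions left to move.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 36999)),
              "default");
          // Then remove the empty test group, which rewrites the /hbase/rsgroup znodes.
          rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_847554951");
        }
      }
    }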
2023-07-12 08:18:36,154 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:36,155 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36999] to rsgroup default 2023-07-12 08:18:36,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_847554951 2023-07-12 08:18:36,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:36,161 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_847554951, current retry=0 2023-07-12 08:18:36,161 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362] are moved back to Group_testMultiTableMove_847554951 2023-07-12 08:18:36,161 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_847554951 => default 2023-07-12 08:18:36,162 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:36,162 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_847554951 2023-07-12 08:18:36,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 08:18:36,175 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:36,176 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:36,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
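Just below, the per-test cleanup tries to move the master's own address (jenkins-hbase4.apache.org:44301) into the 'master' rsgroup, and the server rejects it because that address is not an online region server; TestRSGroupsBase logs the resulting ConstraintException as "Got this on setup, FYI" and carries on. A paraphrased sketch of that tolerant pattern (not a quote of the test source; the helper name and handle are illustrative):

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class TolerantMasterMoveSketch {
      private static final Logger LOG = LoggerFactory.getLogger(TolerantMasterMoveSketch.class);

      // The concrete failure seen in the log is a ConstraintException
      // ("Server ... is either offline or it does not exist."); it is an IOException
      // subtype, so it is caught and downgraded to a warning instead of failing the test.
      static void moveMasterIgnoringConstraint(RSGroupAdminClient rsGroupAdmin) {
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 44301)),
              "master");
        } catch (IOException e) {
          LOG.warn("Got this on setup, FYI", e);
        }
      }
    }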
2023-07-12 08:18:36,176 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:36,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:36,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:36,179 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:36,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:36,186 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:36,191 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:36,192 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:36,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:36,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:36,205 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,205 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:36,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:36,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 509 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151116215, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:36,228 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:36,230 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:36,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,232 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:36,233 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:36,233 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,259 INFO [Listener at localhost/44853] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=500 (was 503), OpenFileDescriptor=759 (was 776), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=577 (was 557) - SystemLoadAverage LEAK? 
-, ProcessCount=174 (was 176), AvailableMemoryMB=3461 (was 3701) 2023-07-12 08:18:36,300 INFO [Listener at localhost/44853] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=500, OpenFileDescriptor=759, MaxFileDescriptor=60000, SystemLoadAverage=577, ProcessCount=174, AvailableMemoryMB=3461 2023-07-12 08:18:36,300 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-12 08:18:36,304 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,304 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,305 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:36,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 08:18:36,305 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:36,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:36,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:36,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:36,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:36,312 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:36,315 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:36,315 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:36,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:36,321 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:36,324 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,324 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,326 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:36,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:36,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 537 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151116326, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:36,327 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 08:18:36,329 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:36,329 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,329 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,330 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:36,330 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:36,330 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,331 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:36,331 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,332 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-12 08:18:36,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 08:18:36,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:36,354 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:36,358 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,358 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,361 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999] to rsgroup oldGroup 2023-07-12 08:18:36,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 08:18:36,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:36,366 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 08:18:36,366 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362, jenkins-hbase4.apache.org,38647,1689149897534] are moved back to default 2023-07-12 08:18:36,366 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-12 08:18:36,366 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:36,369 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,369 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,372 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 08:18:36,372 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,372 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 08:18:36,373 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,373 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:36,373 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,374 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-12 08:18:36,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 08:18:36,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 08:18:36,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 08:18:36,380 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:36,384 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,384 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,386 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41817] to rsgroup anotherRSGroup 2023-07-12 08:18:36,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 08:18:36,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 08:18:36,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 08:18:36,393 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 08:18:36,393 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41817,1689149901106] are moved back to default 2023-07-12 08:18:36,393 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-12 08:18:36,393 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:36,395 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,395 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,397 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 08:18:36,398 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,398 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 08:18:36,398 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-12 08:18:36,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:36,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:58548 deadline: 1689151116402, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-12 08:18:36,405 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-12 08:18:36,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:36,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:58548 deadline: 1689151116405, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-12 08:18:36,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-12 08:18:36,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:36,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:58548 deadline: 1689151116406, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-12 08:18:36,407 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-12 08:18:36,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:36,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:58548 deadline: 1689151116407, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-12 08:18:36,410 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,410 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,411 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:36,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
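The four rename failures above (nonexistent source group, target already exists, renaming 'default', renaming onto 'default') all come from guard clauses in RSGroupInfoManagerImpl.renameRSGroup; the stack traces point at lines 403, 407 and 410 of that class, which fixes the order of the checks. An illustrative reconstruction of that ordering, not the literal branch-2.4 source (the in-memory map stands in for the real group state):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.hbase.constraint.ConstraintException;

    class RenameConstraintSketch {
      private static final String DEFAULT_GROUP = "default";
      private final Map<String, Object> rsGroupMap = new HashMap<>();

      void renameRSGroup(String oldName, String newName) throws ConstraintException {
        // ~RSGroupInfoManagerImpl:403 -- the built-in default group can never be renamed.
        if (DEFAULT_GROUP.equals(oldName)) {
          throw new ConstraintException("Can't rename default rsgroup");
        }
        // ~:407 -- the source group must exist.
        if (!rsGroupMap.containsKey(oldName)) {
          throw new ConstraintException("RSGroup " + oldName + " does not exist");
        }
        // ~:410 -- the target name must not collide with an existing group (including 'default').
        if (rsGroupMap.containsKey(newName)) {
          throw new ConstraintException("Group already exists: " + newName);
        }
        // ... otherwise re-key the group's servers and tables under the new name
        // and persist the updated group info (to ZooKeeper in this setup) ...
      }
    }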
2023-07-12 08:18:36,411 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:36,412 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41817] to rsgroup default 2023-07-12 08:18:36,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 08:18:36,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 08:18:36,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 08:18:36,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-12 08:18:36,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41817,1689149901106] are moved back to anotherRSGroup 2023-07-12 08:18:36,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-12 08:18:36,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:36,418 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-12 08:18:36,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 08:18:36,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 08:18:36,423 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:36,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:36,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-12 08:18:36,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:36,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999] to rsgroup default 2023-07-12 08:18:36,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 08:18:36,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:36,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-12 08:18:36,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362, jenkins-hbase4.apache.org,38647,1689149897534] are moved back to oldGroup 2023-07-12 08:18:36,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-12 08:18:36,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:36,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-12 08:18:36,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 08:18:36,439 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:36,440 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:36,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
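The MoveTables / MoveServers / RemoveRSGroup sequence above is the usual teardown pattern: drain a group back into "default" and then delete it. A rough sketch of that cleanup under the assumption that the branch-2.4 RSGroupAdmin client offers getRSGroupInfo, moveServers(Set<Address>, String) and removeRSGroup(String) as the log suggests; the group name and admin handle are placeholders:

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupCleanupSketch {
  // Drain a group back into "default", then delete it, mirroring the
  // MoveServers -> RemoveRSGroup sequence in the log above.
  static void dropGroup(RSGroupAdmin rsGroupAdmin, String group) throws IOException {
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    if (info == null) {
      return; // group already gone, nothing to clean up
    }
    Set<Address> servers = new HashSet<>(info.getServers());
    if (!servers.isEmpty()) {
      // e.g. jenkins-hbase4.apache.org:38647 and jenkins-hbase4.apache.org:36999 above
      rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
    }
    rsGroupAdmin.removeRSGroup(group);
  }
}
```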
2023-07-12 08:18:36,440 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:36,441 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:36,441 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:36,442 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:36,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:36,447 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:36,449 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:36,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:36,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:36,457 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:36,461 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,461 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,462 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:36,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:36,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 613 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151116462, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:36,463 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:36,465 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:36,466 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,466 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,466 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:36,467 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:36,467 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,487 INFO [Listener at localhost/44853] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=504 (was 500) Potentially hanging thread: hconnection-0x62be270e-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=759 (was 759), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=577 (was 577), ProcessCount=174 (was 174), AvailableMemoryMB=3459 (was 3461) 2023-07-12 08:18:36,487 WARN [Listener at localhost/44853] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 08:18:36,506 INFO [Listener at localhost/44853] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=504, OpenFileDescriptor=759, MaxFileDescriptor=60000, SystemLoadAverage=577, ProcessCount=174, AvailableMemoryMB=3459 2023-07-12 08:18:36,506 WARN [Listener at localhost/44853] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 08:18:36,506 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-12 08:18:36,510 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:36,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
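The ResourceChecker lines above compare per-test before/after counts (threads, open file descriptors, load average) and warn when a limit such as 500 threads is exceeded, which is where the "Thread=504 is superior to 500" warnings come from. A simplified, stand-alone sketch of that before/after accounting; the threshold constant and method names are assumptions for illustration, not HBase's actual ResourceChecker code:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ResourceCheckSketch {
  private static final int MAX_THREADS = 500; // assumed threshold, mirroring the warning above
  private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();
  private int threadsBefore;

  void before(String testName) {
    threadsBefore = threads.getThreadCount();
    System.out.printf("before: %s Thread=%d%n", testName, threadsBefore);
  }

  void after(String testName) {
    int threadsAfter = threads.getThreadCount();
    System.out.printf("after: %s Thread=%d (was %d)%n", testName, threadsAfter, threadsBefore);
    if (threadsAfter > MAX_THREADS) {
      System.out.printf("Thread=%d is superior to %d%n", threadsAfter, MAX_THREADS);
    }
  }
}
```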
2023-07-12 08:18:36,512 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:36,512 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:36,512 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:36,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:36,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:36,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:36,521 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:36,521 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:36,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:36,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:36,530 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,530 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,532 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:36,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:36,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 641 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151116532, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:36,532 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:36,534 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:36,535 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,535 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,535 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:36,536 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:36,536 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,536 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:36,536 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-12 08:18:36,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 08:18:36,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:36,542 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:36,545 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,545 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,547 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999] to rsgroup oldgroup 2023-07-12 08:18:36,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 08:18:36,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:36,555 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 08:18:36,555 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362, jenkins-hbase4.apache.org,38647,1689149897534] are moved back to default 2023-07-12 08:18:36,555 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-12 08:18:36,555 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:36,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:36,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:36,559 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 08:18:36,559 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:36,560 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:36,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-12 08:18:36,563 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:36,563 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-12 08:18:36,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 08:18:36,565 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 08:18:36,565 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:36,565 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:36,566 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:36,568 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:36,569 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/testRename/f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:36,570 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/testRename/f34890516978f0b2fa47b027a21eccfa empty. 
2023-07-12 08:18:36,570 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/testRename/f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:36,570 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-12 08:18:36,584 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:36,586 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => f34890516978f0b2fa47b027a21eccfa, NAME => 'testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:36,600 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:36,600 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing f34890516978f0b2fa47b027a21eccfa, disabling compactions & flushes 2023-07-12 08:18:36,600 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:36,600 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:36,600 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. after waiting 0 ms 2023-07-12 08:18:36,600 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:36,600 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:36,600 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for f34890516978f0b2fa47b027a21eccfa: 2023-07-12 08:18:36,603 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:36,604 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149916604"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149916604"}]},"ts":"1689149916604"} 2023-07-12 08:18:36,611 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
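The CreateTableProcedure above builds 'testRename' with a single column family 'tr' and the schema shown (REGION_REPLICATION 1, VERSIONS 1, BLOOMFILTER NONE, no compression). A client-side sketch of an equivalent create call using the HBase 2.x descriptor builders; the table and family names come from the log, the Admin handle is assumed to come from an existing Connection:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameSketch {
  // Equivalent of the logged create 'testRename', {NAME => 'tr', VERSIONS => '1',
  // BLOOMFILTER => 'NONE', ...}; attributes not set here keep their defaults.
  static void createTestRename(Admin admin) throws IOException {
    TableDescriptorBuilder table = TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
        .setRegionReplication(1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
            .setMaxVersions(1)
            .setBloomFilterType(BloomType.NONE)
            .build());
    admin.createTable(table.build());
  }
}
```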
2023-07-12 08:18:36,611 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:36,612 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149916611"}]},"ts":"1689149916611"} 2023-07-12 08:18:36,613 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-12 08:18:36,616 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:36,616 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:36,616 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:36,616 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:36,616 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, ASSIGN}] 2023-07-12 08:18:36,618 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, ASSIGN 2023-07-12 08:18:36,619 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:36,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 08:18:36,769 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 08:18:36,770 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=f34890516978f0b2fa47b027a21eccfa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:36,771 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149916770"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149916770"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149916770"}]},"ts":"1689149916770"} 2023-07-12 08:18:36,772 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure f34890516978f0b2fa47b027a21eccfa, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:36,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 08:18:36,927 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:36,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f34890516978f0b2fa47b027a21eccfa, NAME => 'testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:36,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:36,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:36,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:36,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:36,929 INFO [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:36,931 DEBUG [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa/tr 2023-07-12 08:18:36,931 DEBUG [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa/tr 2023-07-12 08:18:36,931 INFO [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f34890516978f0b2fa47b027a21eccfa columnFamilyName tr 2023-07-12 08:18:36,932 INFO [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] regionserver.HStore(310): Store=f34890516978f0b2fa47b027a21eccfa/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:36,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:36,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:36,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:36,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:36,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f34890516978f0b2fa47b027a21eccfa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11106460000, jitterRate=0.03436969220638275}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:36,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f34890516978f0b2fa47b027a21eccfa: 2023-07-12 08:18:36,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa., pid=119, masterSystemTime=1689149916924 2023-07-12 08:18:36,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:36,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 
2023-07-12 08:18:36,940 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=f34890516978f0b2fa47b027a21eccfa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:36,941 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149916940"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149916940"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149916940"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149916940"}]},"ts":"1689149916940"} 2023-07-12 08:18:36,943 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-12 08:18:36,943 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure f34890516978f0b2fa47b027a21eccfa, server=jenkins-hbase4.apache.org,41817,1689149901106 in 170 msec 2023-07-12 08:18:36,945 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-12 08:18:36,945 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, ASSIGN in 327 msec 2023-07-12 08:18:36,945 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:36,945 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149916945"}]},"ts":"1689149916945"} 2023-07-12 08:18:36,946 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-12 08:18:36,948 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:36,950 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 388 msec 2023-07-12 08:18:37,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 08:18:37,168 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-12 08:18:37,168 DEBUG [Listener at localhost/44853] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-12 08:18:37,168 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:37,171 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-12 08:18:37,171 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:37,171 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
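The entries above trace CreateTableProcedure pid=117: table testRename is written to hbase:meta, its single region f34890516978f0b2fa47b027a21eccfa (column family tr) is assigned and opened, and the client confirms all regions are assigned. For orientation, a minimal sketch of issuing such a create through the HBase 2.x Java Admin API follows; only the table and family names are taken from this log, while the class name and configuration handling are illustrative, not part of the test's source.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTestRenameTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();          // cluster config (illustrative)
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Table 'testRename' with one column family 'tr', as created at 08:18:36 above.
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("testRename"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
              .build();
          admin.createTable(desc);  // returns once the CreateTableProcedure completes
        }
      }
    }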
2023-07-12 08:18:37,173 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-12 08:18:37,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 08:18:37,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:37,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:37,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:37,179 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-12 08:18:37,179 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region f34890516978f0b2fa47b027a21eccfa to RSGroup oldgroup 2023-07-12 08:18:37,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:37,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:37,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:37,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:37,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:37,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, REOPEN/MOVE 2023-07-12 08:18:37,180 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-12 08:18:37,180 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, REOPEN/MOVE 2023-07-12 08:18:37,180 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f34890516978f0b2fa47b027a21eccfa, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:37,180 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149917180"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149917180"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149917180"}]},"ts":"1689149917180"} 2023-07-12 08:18:37,182 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, 
ppid=120, state=RUNNABLE; CloseRegionProcedure f34890516978f0b2fa47b027a21eccfa, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:37,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:37,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f34890516978f0b2fa47b027a21eccfa, disabling compactions & flushes 2023-07-12 08:18:37,336 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:37,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:37,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. after waiting 0 ms 2023-07-12 08:18:37,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:37,340 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:37,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:37,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f34890516978f0b2fa47b027a21eccfa: 2023-07-12 08:18:37,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f34890516978f0b2fa47b027a21eccfa move to jenkins-hbase4.apache.org,36999,1689149897362 record at close sequenceid=2 2023-07-12 08:18:37,342 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:37,343 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f34890516978f0b2fa47b027a21eccfa, regionState=CLOSED 2023-07-12 08:18:37,343 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149917342"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149917342"}]},"ts":"1689149917342"} 2023-07-12 08:18:37,347 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-12 08:18:37,347 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure f34890516978f0b2fa47b027a21eccfa, server=jenkins-hbase4.apache.org,41817,1689149901106 in 162 msec 2023-07-12 08:18:37,347 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36999,1689149897362; 
forceNewPlan=false, retain=false 2023-07-12 08:18:37,497 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 08:18:37,498 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f34890516978f0b2fa47b027a21eccfa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:37,498 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149917498"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149917498"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149917498"}]},"ts":"1689149917498"} 2023-07-12 08:18:37,500 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure f34890516978f0b2fa47b027a21eccfa, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:37,656 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:37,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f34890516978f0b2fa47b027a21eccfa, NAME => 'testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:37,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:37,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:37,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:37,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:37,658 INFO [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:37,661 DEBUG [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa/tr 2023-07-12 08:18:37,661 DEBUG [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa/tr 2023-07-12 08:18:37,661 INFO [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f34890516978f0b2fa47b027a21eccfa columnFamilyName tr 2023-07-12 08:18:37,662 INFO [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] regionserver.HStore(310): Store=f34890516978f0b2fa47b027a21eccfa/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:37,663 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:37,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:37,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:37,670 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f34890516978f0b2fa47b027a21eccfa; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10115037760, jitterRate=-0.05796369910240173}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:37,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f34890516978f0b2fa47b027a21eccfa: 2023-07-12 08:18:37,670 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa., pid=122, masterSystemTime=1689149917652 2023-07-12 08:18:37,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:37,672 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 
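From 08:18:37,173 onward the log records the client asking to move table testRename into rsgroup oldgroup: the master closes region f34890516978f0b2fa47b027a21eccfa on server 41817 (pid=120/121) and reopens it on server 36999, a member of the target group (pid=122), as shown just above. A hedged sketch of making that request, assuming the RSGroupAdminClient shipped in the hbase-rsgroup module this test exercises, could look like the following; the wrapper class and method are illustrative only.

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToOldgroupSketch {
      static void moveToOldgroup(Connection conn) throws Exception {
        // Assumed: RSGroupAdminClient wraps the RSGroupAdminService endpoint seen in this log.
        RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
        // Move table 'testRename' into rsgroup 'oldgroup'; the master then closes the region on
        // its current server and reopens it on a server of the target group, which is the
        // CLOSE/OPEN pair recorded around pid=120/121/122 above.
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
      }
    }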
2023-07-12 08:18:37,672 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f34890516978f0b2fa47b027a21eccfa, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:37,672 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149917672"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149917672"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149917672"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149917672"}]},"ts":"1689149917672"} 2023-07-12 08:18:37,675 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-12 08:18:37,675 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure f34890516978f0b2fa47b027a21eccfa, server=jenkins-hbase4.apache.org,36999,1689149897362 in 174 msec 2023-07-12 08:18:37,677 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, REOPEN/MOVE in 496 msec 2023-07-12 08:18:38,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-12 08:18:38,180 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-12 08:18:38,180 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:38,184 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:38,184 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:38,186 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:38,187 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-12 08:18:38,187 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:38,188 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 08:18:38,188 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:38,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-12 08:18:38,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:38,190 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:38,190 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:38,191 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-12 08:18:38,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 08:18:38,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 08:18:38,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:38,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:38,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 08:18:38,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:38,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:38,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:38,202 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41817] to rsgroup normal 2023-07-12 08:18:38,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 08:18:38,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 08:18:38,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:38,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:38,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 08:18:38,208 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 08:18:38,208 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41817,1689149901106] are moved back to default 2023-07-12 08:18:38,208 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-12 08:18:38,208 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:38,211 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:38,211 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:38,213 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-12 08:18:38,213 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:38,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:38,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-12 08:18:38,218 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:38,218 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-12 08:18:38,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-12 08:18:38,220 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 08:18:38,220 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 08:18:38,221 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:38,221 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-12 08:18:38,222 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 08:18:38,232 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:38,233 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,234 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b empty. 2023-07-12 08:18:38,234 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,234 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-12 08:18:38,251 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:38,253 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2de6d40274685ae9edc330d242c58d7b, NAME => 'unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:38,267 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:38,268 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 2de6d40274685ae9edc330d242c58d7b, disabling compactions & flushes 2023-07-12 08:18:38,268 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:38,268 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:38,268 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. after waiting 0 ms 2023-07-12 08:18:38,268 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:38,268 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 
2023-07-12 08:18:38,268 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 2de6d40274685ae9edc330d242c58d7b: 2023-07-12 08:18:38,270 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:38,271 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149918271"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149918271"}]},"ts":"1689149918271"} 2023-07-12 08:18:38,272 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:38,273 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:38,273 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149918273"}]},"ts":"1689149918273"} 2023-07-12 08:18:38,274 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-12 08:18:38,278 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, ASSIGN}] 2023-07-12 08:18:38,280 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, ASSIGN 2023-07-12 08:18:38,281 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:38,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-12 08:18:38,432 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=2de6d40274685ae9edc330d242c58d7b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:38,433 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149918432"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149918432"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149918432"}]},"ts":"1689149918432"} 2023-07-12 08:18:38,434 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure 2de6d40274685ae9edc330d242c58d7b, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:38,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=123 2023-07-12 08:18:38,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:38,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2de6d40274685ae9edc330d242c58d7b, NAME => 'unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:38,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:38,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,592 INFO [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,594 DEBUG [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b/ut 2023-07-12 08:18:38,594 DEBUG [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b/ut 2023-07-12 08:18:38,595 INFO [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2de6d40274685ae9edc330d242c58d7b columnFamilyName ut 2023-07-12 08:18:38,595 INFO [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] regionserver.HStore(310): Store=2de6d40274685ae9edc330d242c58d7b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:38,596 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:38,602 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2de6d40274685ae9edc330d242c58d7b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9758799200, jitterRate=-0.0911410003900528}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:38,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2de6d40274685ae9edc330d242c58d7b: 2023-07-12 08:18:38,603 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b., pid=125, masterSystemTime=1689149918586 2023-07-12 08:18:38,604 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:38,604 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 
2023-07-12 08:18:38,605 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=2de6d40274685ae9edc330d242c58d7b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:38,605 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149918605"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149918605"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149918605"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149918605"}]},"ts":"1689149918605"} 2023-07-12 08:18:38,609 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-12 08:18:38,609 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure 2de6d40274685ae9edc330d242c58d7b, server=jenkins-hbase4.apache.org,42347,1689149897465 in 173 msec 2023-07-12 08:18:38,610 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-12 08:18:38,610 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, ASSIGN in 331 msec 2023-07-12 08:18:38,611 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:38,611 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149918611"}]},"ts":"1689149918611"} 2023-07-12 08:18:38,612 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-12 08:18:38,615 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:38,616 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 400 msec 2023-07-12 08:18:38,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-12 08:18:38,822 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-12 08:18:38,822 DEBUG [Listener at localhost/44853] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-12 08:18:38,823 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:38,825 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-12 08:18:38,826 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:38,826 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
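At 08:18:38,191-38,208 the client adds rsgroup normal and moves server jenkins-hbase4.apache.org:41817 into it, and the entries just above finish creating table unmovedTable; the passage that follows moves that table into the new group. A sketch of those group-management calls, again assuming the RSGroupAdmin interface from the hbase-rsgroup module, with the wrapper class being illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    public class NormalGroupSetupSketch {
      static void run(RSGroupAdmin rsGroupAdmin) throws Exception {
        rsGroupAdmin.addRSGroup("normal");                           // add group, as at 08:18:38,191
        rsGroupAdmin.moveServers(                                    // move server, as at 08:18:38,202
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41817)),
            "normal");
        rsGroupAdmin.moveTables(                                     // move table, as at 08:18:38,827 below
            Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
      }
    }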
2023-07-12 08:18:38,827 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-12 08:18:38,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 08:18:38,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 08:18:38,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:38,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:38,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 08:18:38,832 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-12 08:18:38,832 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region 2de6d40274685ae9edc330d242c58d7b to RSGroup normal 2023-07-12 08:18:38,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, REOPEN/MOVE 2023-07-12 08:18:38,833 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-12 08:18:38,833 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, REOPEN/MOVE 2023-07-12 08:18:38,834 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2de6d40274685ae9edc330d242c58d7b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:38,834 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149918834"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149918834"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149918834"}]},"ts":"1689149918834"} 2023-07-12 08:18:38,835 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 2de6d40274685ae9edc330d242c58d7b, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:38,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2de6d40274685ae9edc330d242c58d7b, disabling compactions & flushes 2023-07-12 08:18:38,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 
2023-07-12 08:18:38,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:38,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. after waiting 0 ms 2023-07-12 08:18:38,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:38,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:38,994 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:38,994 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2de6d40274685ae9edc330d242c58d7b: 2023-07-12 08:18:38,994 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2de6d40274685ae9edc330d242c58d7b move to jenkins-hbase4.apache.org,41817,1689149901106 record at close sequenceid=2 2023-07-12 08:18:38,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:38,996 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2de6d40274685ae9edc330d242c58d7b, regionState=CLOSED 2023-07-12 08:18:38,996 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149918996"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149918996"}]},"ts":"1689149918996"} 2023-07-12 08:18:38,998 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-12 08:18:38,998 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 2de6d40274685ae9edc330d242c58d7b, server=jenkins-hbase4.apache.org,42347,1689149897465 in 162 msec 2023-07-12 08:18:38,999 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:39,149 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2de6d40274685ae9edc330d242c58d7b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:39,149 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149919149"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149919149"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149919149"}]},"ts":"1689149919149"} 2023-07-12 08:18:39,151 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 2de6d40274685ae9edc330d242c58d7b, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:39,311 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:39,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2de6d40274685ae9edc330d242c58d7b, NAME => 'unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:39,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:39,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:39,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:39,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:39,320 INFO [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:39,321 DEBUG [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b/ut 2023-07-12 08:18:39,321 DEBUG [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b/ut 2023-07-12 08:18:39,322 INFO [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
2de6d40274685ae9edc330d242c58d7b columnFamilyName ut 2023-07-12 08:18:39,322 INFO [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] regionserver.HStore(310): Store=2de6d40274685ae9edc330d242c58d7b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:39,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:39,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:39,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:39,330 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2de6d40274685ae9edc330d242c58d7b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11964601600, jitterRate=0.11429035663604736}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:39,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2de6d40274685ae9edc330d242c58d7b: 2023-07-12 08:18:39,331 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b., pid=128, masterSystemTime=1689149919307 2023-07-12 08:18:39,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:39,333 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 
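The `assignment.RegionStateStore(405): Put` entries above are writes to the moving region's row in `hbase:meta`; the qualifiers they mention (`regioninfo`, `sn`, `state`, and after open also `server`, `serverstartcode`, `seqnumDuringOpen`) all live in the `info` column family. A minimal sketch of peeking at those columns for one region row, assuming an already-open `Connection` named `conn` (the helper class/method names and anything outside the standard HBase client API are illustrative, not part of this test):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaAssignmentPeek {
  /** Print the assignment columns that hbase:meta holds for one region row (illustrative helper). */
  static void printAssignment(Connection conn, String regionRowKey) throws Exception {
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Get get = new Get(Bytes.toBytes(regionRowKey));
      get.addFamily(Bytes.toBytes("info")); // the catalog family written by the Put entries in the log
      Result result = meta.get(get);
      for (String qualifier : new String[] {"state", "sn", "server", "serverstartcode", "seqnumDuringOpen"}) {
        byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes(qualifier));
        // serverstartcode and seqnumDuringOpen are binary longs, so print them escaped
        System.out.println(qualifier + " = " + (value == null ? "<absent>" : Bytes.toStringBinary(value)));
      }
    }
  }
}
```

For the region above, the row key would be the full region name as it appears in the log, e.g. `unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.`.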
2023-07-12 08:18:39,333 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2de6d40274685ae9edc330d242c58d7b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:39,333 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149919333"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149919333"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149919333"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149919333"}]},"ts":"1689149919333"} 2023-07-12 08:18:39,336 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-12 08:18:39,336 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 2de6d40274685ae9edc330d242c58d7b, server=jenkins-hbase4.apache.org,41817,1689149901106 in 183 msec 2023-07-12 08:18:39,338 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, REOPEN/MOVE in 504 msec 2023-07-12 08:18:39,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-12 08:18:39,833 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-12 08:18:39,833 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:39,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:39,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:39,840 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:39,840 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 08:18:39,840 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:39,841 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-12 08:18:39,841 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:39,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 08:18:39,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:39,843 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-12 08:18:39,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 08:18:39,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:39,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:39,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 08:18:39,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-12 08:18:39,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-12 08:18:39,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:39,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:39,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-12 08:18:39,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:39,855 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-12 08:18:39,855 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:39,856 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 08:18:39,856 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:39,859 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:39,859 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:39,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-12 08:18:39,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 08:18:39,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:39,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:39,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 08:18:39,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 08:18:39,867 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-12 08:18:39,867 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region 2de6d40274685ae9edc330d242c58d7b to RSGroup default 2023-07-12 08:18:39,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, REOPEN/MOVE 2023-07-12 08:18:39,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 08:18:39,868 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, REOPEN/MOVE 2023-07-12 08:18:39,869 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2de6d40274685ae9edc330d242c58d7b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:39,869 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149919869"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149919869"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149919869"}]},"ts":"1689149919869"} 2023-07-12 08:18:39,870 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 2de6d40274685ae9edc330d242c58d7b, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:40,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:40,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2de6d40274685ae9edc330d242c58d7b, disabling compactions & flushes 2023-07-12 08:18:40,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:40,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:40,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. after waiting 0 ms 2023-07-12 08:18:40,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:40,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 08:18:40,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:40,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2de6d40274685ae9edc330d242c58d7b: 2023-07-12 08:18:40,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2de6d40274685ae9edc330d242c58d7b move to jenkins-hbase4.apache.org,42347,1689149897465 record at close sequenceid=5 2023-07-12 08:18:40,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:40,031 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2de6d40274685ae9edc330d242c58d7b, regionState=CLOSED 2023-07-12 08:18:40,031 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149920031"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149920031"}]},"ts":"1689149920031"} 2023-07-12 08:18:40,033 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-12 08:18:40,033 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 2de6d40274685ae9edc330d242c58d7b, server=jenkins-hbase4.apache.org,41817,1689149901106 in 162 msec 2023-07-12 08:18:40,034 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:40,184 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2de6d40274685ae9edc330d242c58d7b, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:40,184 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149920184"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149920184"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149920184"}]},"ts":"1689149920184"} 2023-07-12 08:18:40,186 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 2de6d40274685ae9edc330d242c58d7b, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:40,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:40,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2de6d40274685ae9edc330d242c58d7b, NAME => 'unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:40,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:40,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:40,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:40,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:40,361 INFO [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:40,362 DEBUG [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b/ut 2023-07-12 08:18:40,362 DEBUG [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b/ut 2023-07-12 08:18:40,363 INFO [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2de6d40274685ae9edc330d242c58d7b columnFamilyName ut 2023-07-12 08:18:40,363 INFO [StoreOpener-2de6d40274685ae9edc330d242c58d7b-1] regionserver.HStore(310): Store=2de6d40274685ae9edc330d242c58d7b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:40,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:40,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:40,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:40,369 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2de6d40274685ae9edc330d242c58d7b; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10925388160, jitterRate=0.017506062984466553}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:40,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2de6d40274685ae9edc330d242c58d7b: 2023-07-12 08:18:40,370 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b., pid=131, masterSystemTime=1689149920338 2023-07-12 08:18:40,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:40,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 
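The REOPEN/MOVE sequence just logged (pid=129 with subprocedures 130/131) was triggered by the `move tables [unmovedTable] to rsgroup default` request a few entries earlier. A rough sketch of issuing that kind of request with the `RSGroupAdminClient` these tests drive, followed by the GetRSGroupInfoOfTable check also visible in the log; the constructor and method signatures are assumed from the branch-2.4 client, and `conn` is an existing `Connection`:

```java
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroup {
  /** Ask the master to move one table into a target rsgroup, then read back its membership. */
  static void moveTable(Connection conn, String table, String targetGroup) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn); // constructor assumed from branch-2.4
    rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf(table)), targetGroup);
    RSGroupInfo groupInfo = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf(table));
    System.out.println(table + " now belongs to rsgroup " + groupInfo.getName());
  }
}
```

A call like `moveTable(conn, "unmovedTable", "default")` would correspond to the MoveTables request and the subsequent "All regions from table(s) [unmovedTable] moved to target group default" confirmation in this log.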
2023-07-12 08:18:40,372 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2de6d40274685ae9edc330d242c58d7b, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:40,372 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689149920372"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149920372"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149920372"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149920372"}]},"ts":"1689149920372"} 2023-07-12 08:18:40,375 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-12 08:18:40,375 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 2de6d40274685ae9edc330d242c58d7b, server=jenkins-hbase4.apache.org,42347,1689149897465 in 187 msec 2023-07-12 08:18:40,376 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=2de6d40274685ae9edc330d242c58d7b, REOPEN/MOVE in 508 msec 2023-07-12 08:18:40,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-12 08:18:40,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-12 08:18:40,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:40,870 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41817] to rsgroup default 2023-07-12 08:18:40,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 08:18:40,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:40,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:40,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 08:18:40,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 08:18:40,878 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-12 08:18:40,878 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41817,1689149901106] are moved back to normal 2023-07-12 08:18:40,878 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-12 08:18:40,878 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:40,879 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-12 08:18:40,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:40,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:40,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 08:18:40,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 08:18:40,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:40,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:40,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 08:18:40,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:40,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:40,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:40,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:40,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:40,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 08:18:40,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 08:18:40,891 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:40,893 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-12 08:18:40,894 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:40,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 08:18:40,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:40,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-12 08:18:40,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(345): Moving region f34890516978f0b2fa47b027a21eccfa to RSGroup default 2023-07-12 08:18:40,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, REOPEN/MOVE 2023-07-12 08:18:40,897 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 08:18:40,897 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, REOPEN/MOVE 2023-07-12 08:18:40,897 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=f34890516978f0b2fa47b027a21eccfa, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:40,898 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149920897"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149920897"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149920897"}]},"ts":"1689149920897"} 2023-07-12 08:18:40,899 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure f34890516978f0b2fa47b027a21eccfa, server=jenkins-hbase4.apache.org,36999,1689149897362}] 2023-07-12 08:18:41,010 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 08:18:41,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:41,053 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f34890516978f0b2fa47b027a21eccfa, disabling compactions & flushes 2023-07-12 08:18:41,053 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:41,053 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:41,053 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 
after waiting 0 ms 2023-07-12 08:18:41,053 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:41,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 08:18:41,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:41,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f34890516978f0b2fa47b027a21eccfa: 2023-07-12 08:18:41,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f34890516978f0b2fa47b027a21eccfa move to jenkins-hbase4.apache.org,41817,1689149901106 record at close sequenceid=5 2023-07-12 08:18:41,065 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:41,067 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=f34890516978f0b2fa47b027a21eccfa, regionState=CLOSED 2023-07-12 08:18:41,067 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149921067"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149921067"}]},"ts":"1689149921067"} 2023-07-12 08:18:41,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-12 08:18:41,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure f34890516978f0b2fa47b027a21eccfa, server=jenkins-hbase4.apache.org,36999,1689149897362 in 170 msec 2023-07-12 08:18:41,073 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:41,223 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 08:18:41,224 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=f34890516978f0b2fa47b027a21eccfa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:41,224 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149921224"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149921224"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149921224"}]},"ts":"1689149921224"} 2023-07-12 08:18:41,226 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure f34890516978f0b2fa47b027a21eccfa, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:41,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:41,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f34890516978f0b2fa47b027a21eccfa, NAME => 'testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:41,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:41,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:41,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:41,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:41,387 INFO [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:41,388 DEBUG [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa/tr 2023-07-12 08:18:41,388 DEBUG [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa/tr 2023-07-12 08:18:41,388 INFO [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f34890516978f0b2fa47b027a21eccfa columnFamilyName tr 2023-07-12 08:18:41,389 INFO [StoreOpener-f34890516978f0b2fa47b027a21eccfa-1] regionserver.HStore(310): Store=f34890516978f0b2fa47b027a21eccfa/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:41,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:41,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:41,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:41,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f34890516978f0b2fa47b027a21eccfa; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11905116480, jitterRate=0.1087503731250763}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:41,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f34890516978f0b2fa47b027a21eccfa: 2023-07-12 08:18:41,397 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa., pid=134, masterSystemTime=1689149921380 2023-07-12 08:18:41,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:41,398 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 
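In the entries that follow, the teardown moves the remaining servers back to `default`, removes `newgroup`, re-adds a `master` group, and then tries to move the master's own RPC address (`jenkins-hbase4.apache.org:44301`) into it; the master rejects that with the `ConstraintException` shown further below, because the address is not a live region server known to the group manager. A hedged sketch of that call and its failure mode, again assuming the branch-2.4 `RSGroupAdminClient` API (the class and method names here are illustrative, not the test's literal code):

```java
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterAddress {
  /** Attempt the move that fails below: the master's RPC address is not a region server. */
  static void tryMoveMasterAddress(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn); // constructor assumed from branch-2.4
    try {
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:44301")), "master");
    } catch (ConstraintException e) {
      // Matches the log: "Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist."
      System.out.println("move rejected: " + e.getMessage());
    }
  }
}
```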
2023-07-12 08:18:41,399 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=f34890516978f0b2fa47b027a21eccfa, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:41,399 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689149921399"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149921399"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149921399"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149921399"}]},"ts":"1689149921399"} 2023-07-12 08:18:41,404 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-12 08:18:41,404 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure f34890516978f0b2fa47b027a21eccfa, server=jenkins-hbase4.apache.org,41817,1689149901106 in 175 msec 2023-07-12 08:18:41,407 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f34890516978f0b2fa47b027a21eccfa, REOPEN/MOVE in 508 msec 2023-07-12 08:18:41,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-12 08:18:41,897 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-12 08:18:41,897 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:41,898 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999] to rsgroup default 2023-07-12 08:18:41,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:41,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 08:18:41,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:41,902 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-12 08:18:41,902 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362, jenkins-hbase4.apache.org,38647,1689149897534] are moved back to newgroup 2023-07-12 08:18:41,902 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-12 08:18:41,902 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:41,903 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-12 08:18:41,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:41,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:41,907 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:41,910 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:41,911 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:41,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:41,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:41,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:41,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:41,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:41,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:41,924 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:41,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-12 08:18:41,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 761 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151121924, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist.
2023-07-12 08:18:41,924 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-12 08:18:41,926 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-12 08:18:41,926 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-12 08:18:41,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 08:18:41,927 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-12 08:18:41,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-12 08:18:41,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-12 08:18:41,944 INFO [Listener at localhost/44853] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=498 (was 504), OpenFileDescriptor=748 (was 759), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=563 (was 577), ProcessCount=174 (was 174), AvailableMemoryMB=3286 (was 3459)
2023-07-12 08:18:41,960 INFO [Listener at localhost/44853] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=498, OpenFileDescriptor=748, MaxFileDescriptor=60000, SystemLoadAverage=563, ProcessCount=174, AvailableMemoryMB=3286
2023-07-12 08:18:41,960 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(132): testBogusArgs
2023-07-12 08:18:41,964 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-12 08:18:41,964 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131)
master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:41,965 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:41,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 08:18:41,965 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:41,966 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:41,966 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:41,966 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:41,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:41,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:41,972 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:41,974 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:41,975 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:41,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:41,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:41,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:41,980 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:41,984 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:41,984 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:41,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers 
[jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:41,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:41,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 789 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151121992, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:41,993 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 08:18:41,995 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:41,996 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:41,996 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:41,997 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:41,998 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:41,998 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:41,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-12 08:18:41,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:42,007 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-12 08:18:42,007 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-12 08:18:42,008 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-12 08:18:42,008 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:42,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-12 08:18:42,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:42,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 801 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:58548 deadline: 1689151122009, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-12 08:18:42,011 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-12 08:18:42,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:42,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 804 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:58548 deadline: 1689151122011, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 08:18:42,013 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-12 08:18:42,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-12 08:18:42,018 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-12 08:18:42,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:42,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 808 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:58548 deadline: 1689151122017, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 08:18:42,022 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:42,022 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:42,023 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:42,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:42,023 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:42,024 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:42,024 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:42,025 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:42,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:42,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:42,030 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:42,032 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:42,033 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:42,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:42,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:42,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:42,039 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:42,041 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:42,041 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:42,043 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:42,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:42,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 832 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151122043, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:42,046 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:42,047 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:42,048 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:42,048 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:42,048 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:42,049 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:42,049 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:42,069 INFO [Listener at localhost/44853] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=502 (was 498) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x35609bcb-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=748 (was 748), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=563 (was 563), ProcessCount=174 (was 174), AvailableMemoryMB=3287 (was 3286) - AvailableMemoryMB LEAK? - 2023-07-12 08:18:42,069 WARN [Listener at localhost/44853] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-12 08:18:42,085 INFO [Listener at localhost/44853] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=502, OpenFileDescriptor=748, MaxFileDescriptor=60000, SystemLoadAverage=563, ProcessCount=174, AvailableMemoryMB=3286 2023-07-12 08:18:42,085 WARN [Listener at localhost/44853] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-12 08:18:42,085 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-12 08:18:42,089 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:42,089 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:42,090 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:42,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:42,090 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:42,091 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:42,091 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:42,091 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:42,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:42,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:42,096 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:42,099 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:42,118 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:42,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:42,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:42,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:42,125 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:42,142 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:42,142 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:42,144 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:42,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-12 08:18:42,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 860 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151122144, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist.
2023-07-12 08:18:42,145 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI
org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
2023-07-12 08:18:42,146 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-12 08:18:42,147 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-12 08:18:42,147 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 08:18:42,147 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-12 08:18:42,148 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-12 08:18:42,148 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-12 08:18:42,149 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-12 08:18:42,149 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-12 08:18:42,150 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_890564596
2023-07-12 08:18:42,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-12 08:18:42,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-12 08:18:42,153 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_890564596 2023-07-12 08:18:42,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:42,161 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:42,169 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:42,170 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:42,172 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999] to rsgroup Group_testDisabledTableMove_890564596 2023-07-12 08:18:42,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:42,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_890564596 2023-07-12 08:18:42,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:42,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:42,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 08:18:42,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362, jenkins-hbase4.apache.org,38647,1689149897534] are moved back to default 2023-07-12 08:18:42,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_890564596 2023-07-12 08:18:42,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:42,179 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:42,179 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:42,181 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_890564596 2023-07-12 08:18:42,181 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:42,183 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:42,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-12 08:18:42,187 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:42,187 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-12 08:18:42,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 08:18:42,189 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:42,189 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_890564596 2023-07-12 08:18:42,190 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:42,190 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:42,192 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:42,197 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399 2023-07-12 08:18:42,197 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,197 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,197 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,197 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,198 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399 empty. 2023-07-12 08:18:42,198 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5 empty. 2023-07-12 08:18:42,198 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958 empty. 2023-07-12 08:18:42,198 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44 empty. 2023-07-12 08:18:42,198 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008 empty. 2023-07-12 08:18:42,199 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,199 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,199 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399 2023-07-12 08:18:42,199 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,199 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,199 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 08:18:42,221 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:42,222 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 07c5451afa126586aa359b48cece9399, NAME => 'Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:42,223 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => dbf07a818de943134f8b97cf484f8008, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:42,223 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 6dc2dd2ff8c53d54627c4c615020efd5, NAME => 'Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:42,268 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:42,268 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 6dc2dd2ff8c53d54627c4c615020efd5, disabling compactions & flushes 2023-07-12 08:18:42,268 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 2023-07-12 08:18:42,268 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 2023-07-12 08:18:42,268 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. after waiting 0 ms 2023-07-12 08:18:42,268 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 2023-07-12 08:18:42,268 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 
2023-07-12 08:18:42,268 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 6dc2dd2ff8c53d54627c4c615020efd5: 2023-07-12 08:18:42,269 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 86ddc0ece211348fcb15dbd8ccf76958, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:42,271 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:42,271 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing dbf07a818de943134f8b97cf484f8008, disabling compactions & flushes 2023-07-12 08:18:42,271 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 2023-07-12 08:18:42,271 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 2023-07-12 08:18:42,271 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. after waiting 0 ms 2023-07-12 08:18:42,271 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 2023-07-12 08:18:42,271 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 
2023-07-12 08:18:42,271 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for dbf07a818de943134f8b97cf484f8008: 2023-07-12 08:18:42,272 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => ee537927d1e2ca3b51d7c8f601951d44, NAME => 'Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp 2023-07-12 08:18:42,273 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:42,273 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 07c5451afa126586aa359b48cece9399, disabling compactions & flushes 2023-07-12 08:18:42,273 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 2023-07-12 08:18:42,273 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 2023-07-12 08:18:42,273 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. after waiting 0 ms 2023-07-12 08:18:42,273 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 2023-07-12 08:18:42,273 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 2023-07-12 08:18:42,273 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 07c5451afa126586aa359b48cece9399: 2023-07-12 08:18:42,285 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:42,285 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 86ddc0ece211348fcb15dbd8ccf76958, disabling compactions & flushes 2023-07-12 08:18:42,285 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 
2023-07-12 08:18:42,285 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 2023-07-12 08:18:42,285 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. after waiting 0 ms 2023-07-12 08:18:42,285 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 2023-07-12 08:18:42,285 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 2023-07-12 08:18:42,285 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 86ddc0ece211348fcb15dbd8ccf76958: 2023-07-12 08:18:42,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 08:18:42,290 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:42,291 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing ee537927d1e2ca3b51d7c8f601951d44, disabling compactions & flushes 2023-07-12 08:18:42,291 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 2023-07-12 08:18:42,291 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 2023-07-12 08:18:42,291 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. after waiting 0 ms 2023-07-12 08:18:42,291 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 2023-07-12 08:18:42,291 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 
2023-07-12 08:18:42,291 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for ee537927d1e2ca3b51d7c8f601951d44: 2023-07-12 08:18:42,293 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:42,294 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922294"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149922294"}]},"ts":"1689149922294"} 2023-07-12 08:18:42,295 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922294"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149922294"}]},"ts":"1689149922294"} 2023-07-12 08:18:42,295 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689149922294"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149922294"}]},"ts":"1689149922294"} 2023-07-12 08:18:42,295 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922294"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149922294"}]},"ts":"1689149922294"} 2023-07-12 08:18:42,295 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689149922294"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149922294"}]},"ts":"1689149922294"} 2023-07-12 08:18:42,297 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 08:18:42,298 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:42,298 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149922298"}]},"ts":"1689149922298"} 2023-07-12 08:18:42,299 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-12 08:18:42,302 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:42,302 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:42,302 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:42,302 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:42,303 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=07c5451afa126586aa359b48cece9399, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc2dd2ff8c53d54627c4c615020efd5, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dbf07a818de943134f8b97cf484f8008, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=86ddc0ece211348fcb15dbd8ccf76958, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ee537927d1e2ca3b51d7c8f601951d44, ASSIGN}] 2023-07-12 08:18:42,307 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc2dd2ff8c53d54627c4c615020efd5, ASSIGN 2023-07-12 08:18:42,307 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dbf07a818de943134f8b97cf484f8008, ASSIGN 2023-07-12 08:18:42,307 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=86ddc0ece211348fcb15dbd8ccf76958, ASSIGN 2023-07-12 08:18:42,307 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=07c5451afa126586aa359b48cece9399, ASSIGN 2023-07-12 08:18:42,308 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc2dd2ff8c53d54627c4c615020efd5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:42,308 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dbf07a818de943134f8b97cf484f8008, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:42,308 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ee537927d1e2ca3b51d7c8f601951d44, ASSIGN 2023-07-12 08:18:42,308 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=86ddc0ece211348fcb15dbd8ccf76958, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42347,1689149897465; forceNewPlan=false, retain=false 2023-07-12 08:18:42,308 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=07c5451afa126586aa359b48cece9399, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:42,309 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ee537927d1e2ca3b51d7c8f601951d44, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41817,1689149901106; forceNewPlan=false, retain=false 2023-07-12 08:18:42,458 INFO [jenkins-hbase4:44301] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 08:18:42,462 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=86ddc0ece211348fcb15dbd8ccf76958, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:42,462 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=dbf07a818de943134f8b97cf484f8008, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:42,462 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=6dc2dd2ff8c53d54627c4c615020efd5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:42,462 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=ee537927d1e2ca3b51d7c8f601951d44, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:42,462 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922462"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149922462"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149922462"}]},"ts":"1689149922462"} 2023-07-12 08:18:42,462 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689149922462"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149922462"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149922462"}]},"ts":"1689149922462"} 2023-07-12 08:18:42,462 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922462"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149922462"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149922462"}]},"ts":"1689149922462"} 2023-07-12 08:18:42,463 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=07c5451afa126586aa359b48cece9399, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:42,462 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922462"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149922462"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149922462"}]},"ts":"1689149922462"} 2023-07-12 08:18:42,463 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689149922463"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149922463"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149922463"}]},"ts":"1689149922463"} 2023-07-12 08:18:42,464 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=137, state=RUNNABLE; OpenRegionProcedure 6dc2dd2ff8c53d54627c4c615020efd5, 
server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:42,465 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=140, state=RUNNABLE; OpenRegionProcedure ee537927d1e2ca3b51d7c8f601951d44, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:42,465 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=138, state=RUNNABLE; OpenRegionProcedure dbf07a818de943134f8b97cf484f8008, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:42,467 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=139, state=RUNNABLE; OpenRegionProcedure 86ddc0ece211348fcb15dbd8ccf76958, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:42,471 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=136, state=RUNNABLE; OpenRegionProcedure 07c5451afa126586aa359b48cece9399, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:42,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 08:18:42,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 2023-07-12 08:18:42,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6dc2dd2ff8c53d54627c4c615020efd5, NAME => 'Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 08:18:42,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:42,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,622 INFO [StoreOpener-6dc2dd2ff8c53d54627c4c615020efd5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,624 DEBUG [StoreOpener-6dc2dd2ff8c53d54627c4c615020efd5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5/f 2023-07-12 08:18:42,624 DEBUG [StoreOpener-6dc2dd2ff8c53d54627c4c615020efd5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5/f 2023-07-12 08:18:42,625 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 2023-07-12 08:18:42,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 86ddc0ece211348fcb15dbd8ccf76958, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 08:18:42,625 INFO [StoreOpener-6dc2dd2ff8c53d54627c4c615020efd5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6dc2dd2ff8c53d54627c4c615020efd5 columnFamilyName f 2023-07-12 08:18:42,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:42,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,625 INFO [StoreOpener-6dc2dd2ff8c53d54627c4c615020efd5-1] regionserver.HStore(310): Store=6dc2dd2ff8c53d54627c4c615020efd5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:42,626 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,627 INFO [StoreOpener-86ddc0ece211348fcb15dbd8ccf76958-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,628 DEBUG [StoreOpener-86ddc0ece211348fcb15dbd8ccf76958-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958/f 2023-07-12 08:18:42,628 DEBUG [StoreOpener-86ddc0ece211348fcb15dbd8ccf76958-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958/f 2023-07-12 08:18:42,628 INFO [StoreOpener-86ddc0ece211348fcb15dbd8ccf76958-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 86ddc0ece211348fcb15dbd8ccf76958 columnFamilyName f 2023-07-12 08:18:42,629 INFO [StoreOpener-86ddc0ece211348fcb15dbd8ccf76958-1] regionserver.HStore(310): Store=86ddc0ece211348fcb15dbd8ccf76958/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:42,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:42,634 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6dc2dd2ff8c53d54627c4c615020efd5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9433399360, jitterRate=-0.12144622206687927}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:42,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6dc2dd2ff8c53d54627c4c615020efd5: 2023-07-12 08:18:42,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,635 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5., pid=141, masterSystemTime=1689149922616 2023-07-12 08:18:42,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 2023-07-12 08:18:42,636 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 2023-07-12 08:18:42,636 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 2023-07-12 08:18:42,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:42,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 07c5451afa126586aa359b48cece9399, NAME => 'Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 08:18:42,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 07c5451afa126586aa359b48cece9399 2023-07-12 08:18:42,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:42,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 07c5451afa126586aa359b48cece9399 2023-07-12 08:18:42,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 07c5451afa126586aa359b48cece9399 2023-07-12 08:18:42,637 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 86ddc0ece211348fcb15dbd8ccf76958; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11129320160, jitterRate=0.03649871051311493}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:42,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 86ddc0ece211348fcb15dbd8ccf76958: 2023-07-12 08:18:42,637 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=6dc2dd2ff8c53d54627c4c615020efd5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:42,637 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922637"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149922637"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149922637"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149922637"}]},"ts":"1689149922637"} 2023-07-12 08:18:42,638 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958., pid=144, masterSystemTime=1689149922621 2023-07-12 08:18:42,638 INFO [StoreOpener-07c5451afa126586aa359b48cece9399-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 07c5451afa126586aa359b48cece9399 2023-07-12 08:18:42,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 2023-07-12 08:18:42,639 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 2023-07-12 08:18:42,640 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 2023-07-12 08:18:42,640 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=86ddc0ece211348fcb15dbd8ccf76958, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:42,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dbf07a818de943134f8b97cf484f8008, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 08:18:42,640 DEBUG [StoreOpener-07c5451afa126586aa359b48cece9399-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399/f 2023-07-12 08:18:42,640 DEBUG [StoreOpener-07c5451afa126586aa359b48cece9399-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399/f 2023-07-12 08:18:42,640 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922640"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149922640"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149922640"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149922640"}]},"ts":"1689149922640"} 2023-07-12 08:18:42,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:42,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,640 INFO [StoreOpener-07c5451afa126586aa359b48cece9399-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 07c5451afa126586aa359b48cece9399 columnFamilyName f 2023-07-12 08:18:42,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,641 INFO [StoreOpener-07c5451afa126586aa359b48cece9399-1] regionserver.HStore(310): Store=07c5451afa126586aa359b48cece9399/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:42,641 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=137 2023-07-12 08:18:42,641 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; OpenRegionProcedure 6dc2dd2ff8c53d54627c4c615020efd5, server=jenkins-hbase4.apache.org,41817,1689149901106 in 175 msec 2023-07-12 08:18:42,642 INFO [StoreOpener-dbf07a818de943134f8b97cf484f8008-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399 2023-07-12 08:18:42,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399 2023-07-12 08:18:42,642 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc2dd2ff8c53d54627c4c615020efd5, ASSIGN in 339 msec 2023-07-12 08:18:42,643 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=144, resume processing ppid=139 2023-07-12 08:18:42,643 DEBUG [StoreOpener-dbf07a818de943134f8b97cf484f8008-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008/f 2023-07-12 08:18:42,643 DEBUG [StoreOpener-dbf07a818de943134f8b97cf484f8008-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008/f 2023-07-12 08:18:42,643 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=139, state=SUCCESS; OpenRegionProcedure 86ddc0ece211348fcb15dbd8ccf76958, server=jenkins-hbase4.apache.org,42347,1689149897465 in 174 msec 2023-07-12 08:18:42,644 INFO [StoreOpener-dbf07a818de943134f8b97cf484f8008-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dbf07a818de943134f8b97cf484f8008 columnFamilyName f 2023-07-12 08:18:42,644 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=86ddc0ece211348fcb15dbd8ccf76958, ASSIGN in 341 msec 2023-07-12 08:18:42,644 INFO [StoreOpener-dbf07a818de943134f8b97cf484f8008-1] regionserver.HStore(310): Store=dbf07a818de943134f8b97cf484f8008/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:42,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 07c5451afa126586aa359b48cece9399 2023-07-12 08:18:42,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:42,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,649 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): 
Opened 07c5451afa126586aa359b48cece9399; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11079940800, jitterRate=0.03189989924430847}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:42,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 07c5451afa126586aa359b48cece9399: 2023-07-12 08:18:42,649 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399., pid=145, masterSystemTime=1689149922616 2023-07-12 08:18:42,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:42,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 2023-07-12 08:18:42,651 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 2023-07-12 08:18:42,651 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 2023-07-12 08:18:42,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ee537927d1e2ca3b51d7c8f601951d44, NAME => 'Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 08:18:42,651 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=07c5451afa126586aa359b48cece9399, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:42,651 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dbf07a818de943134f8b97cf484f8008; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12001838720, jitterRate=0.11775833368301392}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:42,651 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689149922651"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149922651"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149922651"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149922651"}]},"ts":"1689149922651"} 2023-07-12 08:18:42,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dbf07a818de943134f8b97cf484f8008: 2023-07-12 08:18:42,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,652 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:42,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,652 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008., pid=143, masterSystemTime=1689149922621 2023-07-12 08:18:42,653 INFO [StoreOpener-ee537927d1e2ca3b51d7c8f601951d44-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 2023-07-12 08:18:42,654 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 2023-07-12 08:18:42,654 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=dbf07a818de943134f8b97cf484f8008, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:42,655 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922654"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149922654"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149922654"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149922654"}]},"ts":"1689149922654"} 2023-07-12 08:18:42,655 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=136 2023-07-12 08:18:42,655 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=136, state=SUCCESS; OpenRegionProcedure 07c5451afa126586aa359b48cece9399, server=jenkins-hbase4.apache.org,41817,1689149901106 in 184 msec 2023-07-12 08:18:42,656 DEBUG [StoreOpener-ee537927d1e2ca3b51d7c8f601951d44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44/f 2023-07-12 08:18:42,656 DEBUG [StoreOpener-ee537927d1e2ca3b51d7c8f601951d44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44/f 2023-07-12 08:18:42,656 INFO [StoreOpener-ee537927d1e2ca3b51d7c8f601951d44-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ee537927d1e2ca3b51d7c8f601951d44 columnFamilyName f 2023-07-12 08:18:42,657 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=07c5451afa126586aa359b48cece9399, ASSIGN in 353 msec 2023-07-12 08:18:42,657 INFO [StoreOpener-ee537927d1e2ca3b51d7c8f601951d44-1] regionserver.HStore(310): Store=ee537927d1e2ca3b51d7c8f601951d44/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:42,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,659 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=138 2023-07-12 08:18:42,659 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=138, state=SUCCESS; OpenRegionProcedure dbf07a818de943134f8b97cf484f8008, server=jenkins-hbase4.apache.org,42347,1689149897465 in 191 msec 2023-07-12 08:18:42,660 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dbf07a818de943134f8b97cf484f8008, ASSIGN in 357 msec 2023-07-12 08:18:42,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:42,665 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ee537927d1e2ca3b51d7c8f601951d44; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10458037760, jitterRate=-0.02601933479309082}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:42,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ee537927d1e2ca3b51d7c8f601951d44: 2023-07-12 08:18:42,665 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44., pid=142, masterSystemTime=1689149922616 2023-07-12 08:18:42,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 2023-07-12 08:18:42,667 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 2023-07-12 08:18:42,667 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=ee537927d1e2ca3b51d7c8f601951d44, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:42,667 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689149922667"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149922667"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149922667"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149922667"}]},"ts":"1689149922667"} 2023-07-12 08:18:42,669 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=140 2023-07-12 08:18:42,669 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=140, state=SUCCESS; OpenRegionProcedure ee537927d1e2ca3b51d7c8f601951d44, server=jenkins-hbase4.apache.org,41817,1689149901106 in 203 msec 2023-07-12 08:18:42,671 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-12 08:18:42,671 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ee537927d1e2ca3b51d7c8f601951d44, ASSIGN in 367 msec 2023-07-12 08:18:42,671 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:42,671 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149922671"}]},"ts":"1689149922671"} 2023-07-12 08:18:42,673 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-12 08:18:42,675 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:42,676 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 492 msec 2023-07-12 08:18:42,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 08:18:42,792 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: 
default:Group_testDisabledTableMove, procId: 135 completed 2023-07-12 08:18:42,792 DEBUG [Listener at localhost/44853] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-12 08:18:42,792 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:42,796 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-12 08:18:42,796 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:42,796 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-12 08:18:42,797 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:42,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 08:18:42,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:42,803 INFO [Listener at localhost/44853] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 08:18:42,803 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-12 08:18:42,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-12 08:18:42,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-12 08:18:42,807 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149922806"}]},"ts":"1689149922806"} 2023-07-12 08:18:42,808 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-12 08:18:42,809 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-12 08:18:42,810 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=07c5451afa126586aa359b48cece9399, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc2dd2ff8c53d54627c4c615020efd5, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dbf07a818de943134f8b97cf484f8008, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=86ddc0ece211348fcb15dbd8ccf76958, UNASSIGN}, {pid=151, 
ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ee537927d1e2ca3b51d7c8f601951d44, UNASSIGN}] 2023-07-12 08:18:42,810 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=86ddc0ece211348fcb15dbd8ccf76958, UNASSIGN 2023-07-12 08:18:42,811 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=07c5451afa126586aa359b48cece9399, UNASSIGN 2023-07-12 08:18:42,811 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dbf07a818de943134f8b97cf484f8008, UNASSIGN 2023-07-12 08:18:42,811 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc2dd2ff8c53d54627c4c615020efd5, UNASSIGN 2023-07-12 08:18:42,811 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ee537927d1e2ca3b51d7c8f601951d44, UNASSIGN 2023-07-12 08:18:42,811 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=86ddc0ece211348fcb15dbd8ccf76958, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:42,811 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=07c5451afa126586aa359b48cece9399, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:42,811 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922811"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149922811"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149922811"}]},"ts":"1689149922811"} 2023-07-12 08:18:42,811 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689149922811"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149922811"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149922811"}]},"ts":"1689149922811"} 2023-07-12 08:18:42,812 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=dbf07a818de943134f8b97cf484f8008, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:42,812 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922812"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149922812"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149922812"}]},"ts":"1689149922812"} 2023-07-12 08:18:42,812 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=6dc2dd2ff8c53d54627c4c615020efd5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:42,812 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922812"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149922812"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149922812"}]},"ts":"1689149922812"} 2023-07-12 08:18:42,812 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=ee537927d1e2ca3b51d7c8f601951d44, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:42,812 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689149922812"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149922812"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149922812"}]},"ts":"1689149922812"} 2023-07-12 08:18:42,813 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=150, state=RUNNABLE; CloseRegionProcedure 86ddc0ece211348fcb15dbd8ccf76958, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:42,813 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=147, state=RUNNABLE; CloseRegionProcedure 07c5451afa126586aa359b48cece9399, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:42,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=149, state=RUNNABLE; CloseRegionProcedure dbf07a818de943134f8b97cf484f8008, server=jenkins-hbase4.apache.org,42347,1689149897465}] 2023-07-12 08:18:42,815 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=148, state=RUNNABLE; CloseRegionProcedure 6dc2dd2ff8c53d54627c4c615020efd5, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:42,816 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=151, state=RUNNABLE; CloseRegionProcedure ee537927d1e2ca3b51d7c8f601951d44, server=jenkins-hbase4.apache.org,41817,1689149901106}] 2023-07-12 08:18:42,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-12 08:18:42,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 86ddc0ece211348fcb15dbd8ccf76958, disabling compactions & flushes 2023-07-12 08:18:42,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 2023-07-12 08:18:42,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 2023-07-12 08:18:42,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. after waiting 0 ms 2023-07-12 08:18:42,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 2023-07-12 08:18:42,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6dc2dd2ff8c53d54627c4c615020efd5, disabling compactions & flushes 2023-07-12 08:18:42,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 2023-07-12 08:18:42,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 2023-07-12 08:18:42,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. after waiting 0 ms 2023-07-12 08:18:42,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 2023-07-12 08:18:42,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:42,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:42,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958. 2023-07-12 08:18:42,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 86ddc0ece211348fcb15dbd8ccf76958: 2023-07-12 08:18:42,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5. 
2023-07-12 08:18:42,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6dc2dd2ff8c53d54627c4c615020efd5: 2023-07-12 08:18:42,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:42,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dbf07a818de943134f8b97cf484f8008, disabling compactions & flushes 2023-07-12 08:18:42,975 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 2023-07-12 08:18:42,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 2023-07-12 08:18:42,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. after waiting 0 ms 2023-07-12 08:18:42,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 2023-07-12 08:18:42,975 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=86ddc0ece211348fcb15dbd8ccf76958, regionState=CLOSED 2023-07-12 08:18:42,975 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922975"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149922975"}]},"ts":"1689149922975"} 2023-07-12 08:18:42,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:42,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ee537927d1e2ca3b51d7c8f601951d44, disabling compactions & flushes 2023-07-12 08:18:42,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 2023-07-12 08:18:42,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 2023-07-12 08:18:42,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. after waiting 0 ms 2023-07-12 08:18:42,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 
2023-07-12 08:18:42,977 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=6dc2dd2ff8c53d54627c4c615020efd5, regionState=CLOSED 2023-07-12 08:18:42,977 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922977"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149922977"}]},"ts":"1689149922977"} 2023-07-12 08:18:42,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:42,981 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008. 2023-07-12 08:18:42,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dbf07a818de943134f8b97cf484f8008: 2023-07-12 08:18:42,983 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:42,985 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=dbf07a818de943134f8b97cf484f8008, regionState=CLOSED 2023-07-12 08:18:42,985 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689149922985"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149922985"}]},"ts":"1689149922985"} 2023-07-12 08:18:42,985 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=150 2023-07-12 08:18:42,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=148 2023-07-12 08:18:42,985 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=150, state=SUCCESS; CloseRegionProcedure 86ddc0ece211348fcb15dbd8ccf76958, server=jenkins-hbase4.apache.org,42347,1689149897465 in 165 msec 2023-07-12 08:18:42,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=148, state=SUCCESS; CloseRegionProcedure 6dc2dd2ff8c53d54627c4c615020efd5, server=jenkins-hbase4.apache.org,41817,1689149901106 in 164 msec 2023-07-12 08:18:42,988 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6dc2dd2ff8c53d54627c4c615020efd5, UNASSIGN in 175 msec 2023-07-12 08:18:42,988 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=86ddc0ece211348fcb15dbd8ccf76958, UNASSIGN in 176 msec 2023-07-12 08:18:42,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:42,989 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=149 2023-07-12 08:18:42,989 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=149, state=SUCCESS; CloseRegionProcedure dbf07a818de943134f8b97cf484f8008, server=jenkins-hbase4.apache.org,42347,1689149897465 in 173 msec 2023-07-12 08:18:42,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44. 2023-07-12 08:18:42,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ee537927d1e2ca3b51d7c8f601951d44: 2023-07-12 08:18:42,991 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dbf07a818de943134f8b97cf484f8008, UNASSIGN in 179 msec 2023-07-12 08:18:42,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:42,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 07c5451afa126586aa359b48cece9399 2023-07-12 08:18:42,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 07c5451afa126586aa359b48cece9399, disabling compactions & flushes 2023-07-12 08:18:42,992 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 2023-07-12 08:18:42,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 2023-07-12 08:18:42,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. after waiting 0 ms 2023-07-12 08:18:42,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 
2023-07-12 08:18:42,992 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=ee537927d1e2ca3b51d7c8f601951d44, regionState=CLOSED 2023-07-12 08:18:42,993 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689149922992"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149922992"}]},"ts":"1689149922992"} 2023-07-12 08:18:42,995 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=151 2023-07-12 08:18:42,995 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=151, state=SUCCESS; CloseRegionProcedure ee537927d1e2ca3b51d7c8f601951d44, server=jenkins-hbase4.apache.org,41817,1689149901106 in 178 msec 2023-07-12 08:18:42,996 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ee537927d1e2ca3b51d7c8f601951d44, UNASSIGN in 185 msec 2023-07-12 08:18:42,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:42,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399. 2023-07-12 08:18:42,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 07c5451afa126586aa359b48cece9399: 2023-07-12 08:18:43,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 07c5451afa126586aa359b48cece9399 2023-07-12 08:18:43,003 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=07c5451afa126586aa359b48cece9399, regionState=CLOSED 2023-07-12 08:18:43,004 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689149923003"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149923003"}]},"ts":"1689149923003"} 2023-07-12 08:18:43,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=147 2023-07-12 08:18:43,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=147, state=SUCCESS; CloseRegionProcedure 07c5451afa126586aa359b48cece9399, server=jenkins-hbase4.apache.org,41817,1689149901106 in 192 msec 2023-07-12 08:18:43,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=146 2023-07-12 08:18:43,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=07c5451afa126586aa359b48cece9399, UNASSIGN in 196 msec 2023-07-12 08:18:43,009 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149923008"}]},"ts":"1689149923008"} 2023-07-12 08:18:43,010 
INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-12 08:18:43,011 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-12 08:18:43,014 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 209 msec 2023-07-12 08:18:43,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-12 08:18:43,109 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-12 08:18:43,109 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_890564596 2023-07-12 08:18:43,111 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_890564596 2023-07-12 08:18:43,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:43,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_890564596 2023-07-12 08:18:43,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:43,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:43,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-12 08:18:43,117 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_890564596, current retry=0 2023-07-12 08:18:43,117 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_890564596. 
2023-07-12 08:18:43,117 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:43,121 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:43,121 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:43,126 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 08:18:43,126 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:43,129 INFO [Listener at localhost/44853] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 08:18:43,130 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-12 08:18:43,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:43,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 920 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:58548 deadline: 1689149983130, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-12 08:18:43,131 DEBUG [Listener at localhost/44853] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
2023-07-12 08:18:43,132 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-12 08:18:43,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 08:18:43,136 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 08:18:43,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_890564596' 2023-07-12 08:18:43,137 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 08:18:43,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:43,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_890564596 2023-07-12 08:18:43,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:43,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:43,145 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399 2023-07-12 08:18:43,145 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:43,145 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:43,145 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:43,145 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:43,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-12 08:18:43,148 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5/f, FileablePath, 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5/recovered.edits] 2023-07-12 08:18:43,148 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399/recovered.edits] 2023-07-12 08:18:43,148 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958/recovered.edits] 2023-07-12 08:18:43,149 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008/recovered.edits] 2023-07-12 08:18:43,149 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44/f, FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44/recovered.edits] 2023-07-12 08:18:43,166 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958/recovered.edits/4.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958/recovered.edits/4.seqid 2023-07-12 08:18:43,166 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008/recovered.edits/4.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008/recovered.edits/4.seqid 2023-07-12 08:18:43,167 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/86ddc0ece211348fcb15dbd8ccf76958 2023-07-12 08:18:43,167 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399/recovered.edits/4.seqid to 
hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399/recovered.edits/4.seqid 2023-07-12 08:18:43,168 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/dbf07a818de943134f8b97cf484f8008 2023-07-12 08:18:43,168 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44/recovered.edits/4.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44/recovered.edits/4.seqid 2023-07-12 08:18:43,168 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5/recovered.edits/4.seqid to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/archive/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5/recovered.edits/4.seqid 2023-07-12 08:18:43,169 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/07c5451afa126586aa359b48cece9399 2023-07-12 08:18:43,169 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/ee537927d1e2ca3b51d7c8f601951d44 2023-07-12 08:18:43,169 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/.tmp/data/default/Group_testDisabledTableMove/6dc2dd2ff8c53d54627c4c615020efd5 2023-07-12 08:18:43,169 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 08:18:43,173 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 08:18:43,176 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-12 08:18:43,181 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-12 08:18:43,184 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 08:18:43,185 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-12 08:18:43,185 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149923185"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:43,185 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149923185"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:43,185 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149923185"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:43,185 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149923185"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:43,185 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149923185"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:43,191 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 08:18:43,191 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 07c5451afa126586aa359b48cece9399, NAME => 'Group_testDisabledTableMove,,1689149922183.07c5451afa126586aa359b48cece9399.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 6dc2dd2ff8c53d54627c4c615020efd5, NAME => 'Group_testDisabledTableMove,aaaaa,1689149922183.6dc2dd2ff8c53d54627c4c615020efd5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => dbf07a818de943134f8b97cf484f8008, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689149922183.dbf07a818de943134f8b97cf484f8008.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 86ddc0ece211348fcb15dbd8ccf76958, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689149922183.86ddc0ece211348fcb15dbd8ccf76958.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => ee537927d1e2ca3b51d7c8f601951d44, NAME => 'Group_testDisabledTableMove,zzzzz,1689149922183.ee537927d1e2ca3b51d7c8f601951d44.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 08:18:43,191 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-12 08:18:43,191 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689149923191"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:43,193 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-12 08:18:43,195 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 08:18:43,196 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 63 msec 2023-07-12 08:18:43,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-12 08:18:43,248 INFO [Listener at localhost/44853] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-12 08:18:43,251 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:43,251 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:43,252 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:43,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:43,252 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:43,253 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:36999] to rsgroup default 2023-07-12 08:18:43,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:43,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_890564596 2023-07-12 08:18:43,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:43,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:43,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_890564596, current retry=0 2023-07-12 08:18:43,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36999,1689149897362, jenkins-hbase4.apache.org,38647,1689149897534] are moved back to Group_testDisabledTableMove_890564596 2023-07-12 08:18:43,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_890564596 => default 2023-07-12 08:18:43,258 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:43,259 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_890564596 2023-07-12 08:18:43,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:43,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:43,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 08:18:43,264 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:43,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:43,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:43,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:43,266 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:43,266 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:43,266 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:43,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:43,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:43,272 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:43,275 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:43,275 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:43,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:43,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:43,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:43,279 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:43,282 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:43,282 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:43,284 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:43,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:43,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 954 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151123283, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:43,284 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:43,286 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:43,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:43,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:43,286 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:43,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:43,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:43,306 INFO [Listener at localhost/44853] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=504 (was 502) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-171974648_17 at /127.0.0.1:54526 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2c378da6-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=769 (was 748) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=563 (was 563), ProcessCount=174 (was 174), AvailableMemoryMB=3216 (was 3286) 2023-07-12 08:18:43,306 WARN [Listener at localhost/44853] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 08:18:43,323 INFO [Listener at localhost/44853] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=504, OpenFileDescriptor=769, MaxFileDescriptor=60000, SystemLoadAverage=563, ProcessCount=174, AvailableMemoryMB=3215 2023-07-12 08:18:43,323 WARN [Listener at localhost/44853] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 08:18:43,323 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-12 08:18:43,327 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:43,327 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:43,328 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:43,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:43,328 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:43,328 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:43,328 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:43,329 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:43,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:43,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:43,333 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:43,336 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:43,337 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:43,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:43,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:43,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:43,343 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:43,345 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:43,345 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:43,347 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44301] to rsgroup master 2023-07-12 08:18:43,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:43,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] ipc.CallRunner(144): callId: 982 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:58548 deadline: 1689151123347, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 2023-07-12 08:18:43,348 WARN [Listener at localhost/44853] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44301 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:43,349 INFO [Listener at localhost/44853] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:43,350 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:43,350 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:43,350 INFO [Listener at localhost/44853] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36999, jenkins-hbase4.apache.org:38647, jenkins-hbase4.apache.org:41817, jenkins-hbase4.apache.org:42347], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:43,351 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:43,351 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44301] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:43,351 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 08:18:43,351 INFO [Listener at localhost/44853] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 08:18:43,351 DEBUG [Listener at localhost/44853] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x62c69654 to 127.0.0.1:51057 2023-07-12 08:18:43,351 DEBUG [Listener at localhost/44853] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,352 DEBUG [Listener at localhost/44853] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 08:18:43,352 DEBUG [Listener at localhost/44853] util.JVMClusterUtil(257): Found active master hash=2036672239, stopped=false 2023-07-12 08:18:43,352 DEBUG [Listener at localhost/44853] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 08:18:43,353 DEBUG [Listener at localhost/44853] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 08:18:43,353 INFO [Listener at localhost/44853] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44301,1689149895428 2023-07-12 08:18:43,355 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:43,355 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:43,355 INFO [Listener at localhost/44853] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 08:18:43,355 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:43,355 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:43,355 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:43,355 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:43,355 DEBUG [Listener at localhost/44853] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2c58bd27 to 127.0.0.1:51057 2023-07-12 08:18:43,355 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:43,356 DEBUG [Listener at localhost/44853] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,356 INFO [Listener at localhost/44853] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36999,1689149897362' ***** 2023-07-12 08:18:43,356 INFO [Listener at localhost/44853] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:43,356 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:43,356 INFO [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:43,356 INFO [Listener at localhost/44853] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42347,1689149897465' ***** 2023-07-12 08:18:43,357 INFO [Listener at localhost/44853] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:43,357 INFO [Listener at localhost/44853] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38647,1689149897534' ***** 2023-07-12 08:18:43,357 INFO [Listener at localhost/44853] 
regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:43,357 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:43,358 INFO [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:43,358 INFO [Listener at localhost/44853] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41817,1689149901106' ***** 2023-07-12 08:18:43,359 INFO [Listener at localhost/44853] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:43,359 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:43,359 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:43,359 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:43,360 INFO [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:43,376 INFO [RS:3;jenkins-hbase4:41817] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@54c35b24{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:43,376 INFO [RS:2;jenkins-hbase4:38647] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@470fdab8{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:43,376 INFO [RS:0;jenkins-hbase4:36999] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2e98cdce{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:43,376 INFO [RS:1;jenkins-hbase4:42347] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@66df3ef2{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:43,380 INFO [RS:2;jenkins-hbase4:38647] server.AbstractConnector(383): Stopped ServerConnector@78d2fac4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:43,380 INFO [RS:0;jenkins-hbase4:36999] server.AbstractConnector(383): Stopped ServerConnector@6d447d66{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:43,380 INFO [RS:2;jenkins-hbase4:38647] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:43,380 INFO [RS:0;jenkins-hbase4:36999] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:43,380 INFO [RS:3;jenkins-hbase4:41817] server.AbstractConnector(383): Stopped ServerConnector@7adb5e78{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:43,380 INFO [RS:1;jenkins-hbase4:42347] server.AbstractConnector(383): Stopped ServerConnector@44d8fb02{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:43,382 INFO [RS:0;jenkins-hbase4:36999] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6ce589e{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:43,381 INFO [RS:2;jenkins-hbase4:38647] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7c1f78c1{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:43,381 INFO [RS:3;jenkins-hbase4:41817] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:43,382 INFO [RS:0;jenkins-hbase4:36999] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6afee7fb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:43,382 INFO [RS:1;jenkins-hbase4:42347] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:43,383 INFO [RS:3;jenkins-hbase4:41817] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b33dcd2{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:43,383 INFO [RS:2;jenkins-hbase4:38647] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7605c194{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:43,384 INFO [RS:1;jenkins-hbase4:42347] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@35b16dd4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:43,385 INFO [RS:3;jenkins-hbase4:41817] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1755bd06{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:43,385 INFO [RS:1;jenkins-hbase4:42347] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ee296f1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:43,388 INFO [RS:3;jenkins-hbase4:41817] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 08:18:43,388 INFO [RS:1;jenkins-hbase4:42347] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 08:18:43,388 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:43,389 INFO [RS:3;jenkins-hbase4:41817] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 08:18:43,389 INFO [RS:3;jenkins-hbase4:41817] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 08:18:43,389 INFO [RS:1;jenkins-hbase4:42347] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 08:18:43,389 INFO [RS:1;jenkins-hbase4:42347] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 08:18:43,389 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:43,389 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(3305): Received CLOSE for 2de6d40274685ae9edc330d242c58d7b 2023-07-12 08:18:43,389 INFO [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(3305): Received CLOSE for f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:43,389 INFO [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:43,389 INFO [RS:0;jenkins-hbase4:36999] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 08:18:43,389 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(3305): Received CLOSE for e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:43,391 INFO [RS:0;jenkins-hbase4:36999] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 08:18:43,391 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(3305): Received CLOSE for ae71929909c3f585c1f0e7f3408f83d2 2023-07-12 08:18:43,390 DEBUG [RS:3;jenkins-hbase4:41817] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00af7513 to 127.0.0.1:51057 2023-07-12 08:18:43,391 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:43,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f34890516978f0b2fa47b027a21eccfa, disabling compactions & flushes 2023-07-12 08:18:43,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2de6d40274685ae9edc330d242c58d7b, disabling compactions & flushes 2023-07-12 08:18:43,389 INFO [RS:2;jenkins-hbase4:38647] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 08:18:43,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:43,391 INFO [RS:2;jenkins-hbase4:38647] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 08:18:43,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:43,391 INFO [RS:2;jenkins-hbase4:38647] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 08:18:43,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. after waiting 0 ms 2023-07-12 08:18:43,391 INFO [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:43,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 
2023-07-12 08:18:43,392 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:43,391 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:43,392 DEBUG [RS:2;jenkins-hbase4:38647] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0b1d0fb0 to 127.0.0.1:51057 2023-07-12 08:18:43,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:43,391 DEBUG [RS:3;jenkins-hbase4:41817] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,391 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:43,392 DEBUG [RS:2;jenkins-hbase4:38647] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,392 INFO [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38647,1689149897534; all regions closed. 2023-07-12 08:18:43,392 INFO [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 08:18:43,392 DEBUG [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1478): Online Regions={f34890516978f0b2fa47b027a21eccfa=testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa.} 2023-07-12 08:18:43,393 DEBUG [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1504): Waiting on f34890516978f0b2fa47b027a21eccfa 2023-07-12 08:18:43,391 INFO [RS:0;jenkins-hbase4:36999] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 08:18:43,392 DEBUG [RS:1;jenkins-hbase4:42347] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x257afa1a to 127.0.0.1:51057 2023-07-12 08:18:43,392 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:43,393 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:43,392 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:43,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. after waiting 0 ms 2023-07-12 08:18:43,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:43,393 DEBUG [RS:1;jenkins-hbase4:42347] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,394 INFO [RS:1;jenkins-hbase4:42347] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 08:18:43,394 INFO [RS:1;jenkins-hbase4:42347] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:43,394 INFO [RS:1;jenkins-hbase4:42347] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 08:18:43,394 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 08:18:43,393 INFO [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:43,394 DEBUG [RS:0;jenkins-hbase4:36999] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x004452b0 to 127.0.0.1:51057 2023-07-12 08:18:43,394 DEBUG [RS:0;jenkins-hbase4:36999] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,394 INFO [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36999,1689149897362; all regions closed. 2023-07-12 08:18:43,399 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:43,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/testRename/f34890516978f0b2fa47b027a21eccfa/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 08:18:43,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/default/unmovedTable/2de6d40274685ae9edc330d242c58d7b/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 08:18:43,409 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 2023-07-12 08:18:43,410 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 08:18:43,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f34890516978f0b2fa47b027a21eccfa: 2023-07-12 08:18:43,410 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 08:18:43,410 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 08:18:43,410 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 08:18:43,410 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 08:18:43,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689149916560.f34890516978f0b2fa47b027a21eccfa. 
2023-07-12 08:18:43,410 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=82.47 KB heapSize=130.27 KB 2023-07-12 08:18:43,410 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-12 08:18:43,410 DEBUG [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1478): Online Regions={2de6d40274685ae9edc330d242c58d7b=unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b., 1588230740=hbase:meta,,1.1588230740, e819f13729c8274f2f0efb5a42e75184=hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184., ae71929909c3f585c1f0e7f3408f83d2=hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2.} 2023-07-12 08:18:43,410 DEBUG [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1504): Waiting on 1588230740, 2de6d40274685ae9edc330d242c58d7b, ae71929909c3f585c1f0e7f3408f83d2, e819f13729c8274f2f0efb5a42e75184 2023-07-12 08:18:43,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:43,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2de6d40274685ae9edc330d242c58d7b: 2023-07-12 08:18:43,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689149918215.2de6d40274685ae9edc330d242c58d7b. 2023-07-12 08:18:43,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e819f13729c8274f2f0efb5a42e75184, disabling compactions & flushes 2023-07-12 08:18:43,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:43,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:43,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. after waiting 0 ms 2023-07-12 08:18:43,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 
2023-07-12 08:18:43,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e819f13729c8274f2f0efb5a42e75184 1/1 column families, dataSize=22.07 KB heapSize=36.54 KB 2023-07-12 08:18:43,432 DEBUG [RS:2;jenkins-hbase4:38647] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/oldWALs 2023-07-12 08:18:43,432 INFO [RS:2;jenkins-hbase4:38647] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38647%2C1689149897534:(num 1689149899509) 2023-07-12 08:18:43,432 DEBUG [RS:2;jenkins-hbase4:38647] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,432 INFO [RS:2;jenkins-hbase4:38647] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:43,432 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/WALs/jenkins-hbase4.apache.org,36999,1689149897362/jenkins-hbase4.apache.org%2C36999%2C1689149897362.1689149899509 not finished, retry = 0 2023-07-12 08:18:43,433 INFO [RS:2;jenkins-hbase4:38647] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:43,433 INFO [RS:2;jenkins-hbase4:38647] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 08:18:43,433 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 08:18:43,433 INFO [RS:2;jenkins-hbase4:38647] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:43,433 INFO [RS:2;jenkins-hbase4:38647] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 08:18:43,436 INFO [RS:2;jenkins-hbase4:38647] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38647 2023-07-12 08:18:43,481 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.48 KB at sequenceid=212 (bloomFilter=false), to=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/.tmp/info/ef0f50213535496da6f7caa4a2000cb9 2023-07-12 08:18:43,482 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/.tmp/m/cf789a7b2237483d93a28dfa390c3721 2023-07-12 08:18:43,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cf789a7b2237483d93a28dfa390c3721 2023-07-12 08:18:43,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/.tmp/m/cf789a7b2237483d93a28dfa390c3721 as hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m/cf789a7b2237483d93a28dfa390c3721 2023-07-12 08:18:43,496 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef0f50213535496da6f7caa4a2000cb9 2023-07-12 08:18:43,501 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:43,501 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:43,501 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:43,501 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:43,501 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:43,501 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:43,501 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): 
regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:43,501 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38647,1689149897534 2023-07-12 08:18:43,502 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:43,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cf789a7b2237483d93a28dfa390c3721 2023-07-12 08:18:43,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/m/cf789a7b2237483d93a28dfa390c3721, entries=22, sequenceid=107, filesize=5.9 K 2023-07-12 08:18:43,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22601, heapSize ~36.52 KB/37400, currentSize=0 B/0 for e819f13729c8274f2f0efb5a42e75184 in 84ms, sequenceid=107, compaction requested=true 2023-07-12 08:18:43,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 08:18:43,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/rsgroup/e819f13729c8274f2f0efb5a42e75184/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-12 08:18:43,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 08:18:43,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:43,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e819f13729c8274f2f0efb5a42e75184: 2023-07-12 08:18:43,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689149900135.e819f13729c8274f2f0efb5a42e75184. 2023-07-12 08:18:43,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ae71929909c3f585c1f0e7f3408f83d2, disabling compactions & flushes 2023-07-12 08:18:43,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:43,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:43,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 
after waiting 0 ms 2023-07-12 08:18:43,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:43,525 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=212 (bloomFilter=false), to=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/.tmp/rep_barrier/d5e590a7935344e3a3f430a47d6d4c27 2023-07-12 08:18:43,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/namespace/ae71929909c3f585c1f0e7f3408f83d2/recovered.edits/15.seqid, newMaxSeqId=15, maxSeqId=12 2023-07-12 08:18:43,533 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d5e590a7935344e3a3f430a47d6d4c27 2023-07-12 08:18:43,535 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:43,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ae71929909c3f585c1f0e7f3408f83d2: 2023-07-12 08:18:43,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689149899938.ae71929909c3f585c1f0e7f3408f83d2. 2023-07-12 08:18:43,536 DEBUG [RS:0;jenkins-hbase4:36999] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/oldWALs 2023-07-12 08:18:43,536 INFO [RS:0;jenkins-hbase4:36999] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36999%2C1689149897362:(num 1689149899509) 2023-07-12 08:18:43,536 DEBUG [RS:0;jenkins-hbase4:36999] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,536 INFO [RS:0;jenkins-hbase4:36999] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:43,536 INFO [RS:0;jenkins-hbase4:36999] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:43,537 INFO [RS:0;jenkins-hbase4:36999] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 08:18:43,537 INFO [RS:0;jenkins-hbase4:36999] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:43,537 INFO [RS:0;jenkins-hbase4:36999] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 08:18:43,537 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 08:18:43,538 INFO [RS:0;jenkins-hbase4:36999] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36999 2023-07-12 08:18:43,549 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=212 (bloomFilter=false), to=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/.tmp/table/e31f59fcaa8d4c6a9d78f7c1c241dfe5 2023-07-12 08:18:43,555 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e31f59fcaa8d4c6a9d78f7c1c241dfe5 2023-07-12 08:18:43,556 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/.tmp/info/ef0f50213535496da6f7caa4a2000cb9 as hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/info/ef0f50213535496da6f7caa4a2000cb9 2023-07-12 08:18:43,561 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef0f50213535496da6f7caa4a2000cb9 2023-07-12 08:18:43,561 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/info/ef0f50213535496da6f7caa4a2000cb9, entries=108, sequenceid=212, filesize=17.2 K 2023-07-12 08:18:43,562 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/.tmp/rep_barrier/d5e590a7935344e3a3f430a47d6d4c27 as hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/rep_barrier/d5e590a7935344e3a3f430a47d6d4c27 2023-07-12 08:18:43,568 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d5e590a7935344e3a3f430a47d6d4c27 2023-07-12 08:18:43,569 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/rep_barrier/d5e590a7935344e3a3f430a47d6d4c27, entries=18, sequenceid=212, filesize=6.9 K 2023-07-12 08:18:43,569 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/.tmp/table/e31f59fcaa8d4c6a9d78f7c1c241dfe5 as hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/table/e31f59fcaa8d4c6a9d78f7c1c241dfe5 2023-07-12 08:18:43,576 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e31f59fcaa8d4c6a9d78f7c1c241dfe5 2023-07-12 08:18:43,577 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/table/e31f59fcaa8d4c6a9d78f7c1c241dfe5, entries=31, sequenceid=212, filesize=7.4 K 2023-07-12 08:18:43,578 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~82.47 KB/84449, heapSize ~130.23 KB/133352, currentSize=0 B/0 for 1588230740 in 167ms, sequenceid=212, compaction requested=false 2023-07-12 08:18:43,586 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/data/hbase/meta/1588230740/recovered.edits/215.seqid, newMaxSeqId=215, maxSeqId=1 2023-07-12 08:18:43,586 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 08:18:43,587 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 08:18:43,587 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 08:18:43,587 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 08:18:43,593 INFO [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41817,1689149901106; all regions closed. 2023-07-12 08:18:43,599 DEBUG [RS:3;jenkins-hbase4:41817] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/oldWALs 2023-07-12 08:18:43,599 INFO [RS:3;jenkins-hbase4:41817] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41817%2C1689149901106:(num 1689149901417) 2023-07-12 08:18:43,599 DEBUG [RS:3;jenkins-hbase4:41817] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,599 INFO [RS:3;jenkins-hbase4:41817] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:43,600 INFO [RS:3;jenkins-hbase4:41817] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:43,600 INFO [RS:3;jenkins-hbase4:41817] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 08:18:43,600 INFO [RS:3;jenkins-hbase4:41817] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:43,600 INFO [RS:3;jenkins-hbase4:41817] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 08:18:43,600 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:43,600 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:43,601 INFO [RS:3;jenkins-hbase4:41817] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41817 2023-07-12 08:18:43,600 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:43,601 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 08:18:43,601 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36999,1689149897362 2023-07-12 08:18:43,602 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36999,1689149897362] 2023-07-12 08:18:43,602 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36999,1689149897362; numProcessing=1 2023-07-12 08:18:43,603 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:43,603 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41817,1689149901106 2023-07-12 08:18:43,603 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:43,604 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36999,1689149897362 already deleted, retry=false 2023-07-12 08:18:43,604 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36999,1689149897362 expired; onlineServers=3 2023-07-12 08:18:43,605 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38647,1689149897534] 2023-07-12 08:18:43,605 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38647,1689149897534; numProcessing=2 2023-07-12 08:18:43,610 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42347,1689149897465; all regions closed. 
2023-07-12 08:18:43,616 DEBUG [RS:1;jenkins-hbase4:42347] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/oldWALs 2023-07-12 08:18:43,616 INFO [RS:1;jenkins-hbase4:42347] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42347%2C1689149897465.meta:.meta(num 1689149899710) 2023-07-12 08:18:43,621 DEBUG [RS:1;jenkins-hbase4:42347] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/oldWALs 2023-07-12 08:18:43,622 INFO [RS:1;jenkins-hbase4:42347] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42347%2C1689149897465:(num 1689149899509) 2023-07-12 08:18:43,622 DEBUG [RS:1;jenkins-hbase4:42347] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,622 INFO [RS:1;jenkins-hbase4:42347] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:43,622 INFO [RS:1;jenkins-hbase4:42347] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:43,622 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 08:18:43,623 INFO [RS:1;jenkins-hbase4:42347] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42347 2023-07-12 08:18:43,703 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:43,704 INFO [RS:0;jenkins-hbase4:36999] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36999,1689149897362; zookeeper connection closed. 
2023-07-12 08:18:43,704 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:36999-0x101589c725b0001, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:43,704 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@43c63d4c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@43c63d4c 2023-07-12 08:18:43,704 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:43,705 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42347,1689149897465 2023-07-12 08:18:43,705 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38647,1689149897534 already deleted, retry=false 2023-07-12 08:18:43,705 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38647,1689149897534 expired; onlineServers=2 2023-07-12 08:18:43,706 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42347,1689149897465] 2023-07-12 08:18:43,706 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42347,1689149897465; numProcessing=3 2023-07-12 08:18:43,708 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42347,1689149897465 already deleted, retry=false 2023-07-12 08:18:43,709 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42347,1689149897465 expired; onlineServers=1 2023-07-12 08:18:43,709 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41817,1689149901106] 2023-07-12 08:18:43,709 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41817,1689149901106; numProcessing=4 2023-07-12 08:18:43,710 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41817,1689149901106 already deleted, retry=false 2023-07-12 08:18:43,710 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41817,1689149901106 expired; onlineServers=0 2023-07-12 08:18:43,710 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44301,1689149895428' ***** 2023-07-12 08:18:43,710 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 08:18:43,711 DEBUG [M:0;jenkins-hbase4:44301] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38582cd4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:43,711 INFO [M:0;jenkins-hbase4:44301] 
regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:43,713 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:43,713 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:43,713 INFO [M:0;jenkins-hbase4:44301] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@64480317{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 08:18:43,714 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:43,714 INFO [M:0;jenkins-hbase4:44301] server.AbstractConnector(383): Stopped ServerConnector@71df00d8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:43,714 INFO [M:0;jenkins-hbase4:44301] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:43,715 INFO [M:0;jenkins-hbase4:44301] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5320c268{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:43,715 INFO [M:0;jenkins-hbase4:44301] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7a39ade6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:43,715 INFO [M:0;jenkins-hbase4:44301] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44301,1689149895428 2023-07-12 08:18:43,716 INFO [M:0;jenkins-hbase4:44301] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44301,1689149895428; all regions closed. 2023-07-12 08:18:43,716 DEBUG [M:0;jenkins-hbase4:44301] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:43,716 INFO [M:0;jenkins-hbase4:44301] master.HMaster(1491): Stopping master jetty server 2023-07-12 08:18:43,716 INFO [M:0;jenkins-hbase4:44301] server.AbstractConnector(383): Stopped ServerConnector@3a033f72{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:43,717 DEBUG [M:0;jenkins-hbase4:44301] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 08:18:43,717 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 08:18:43,717 DEBUG [M:0;jenkins-hbase4:44301] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 08:18:43,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689149899051] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689149899051,5,FailOnTimeoutGroup] 2023-07-12 08:18:43,717 INFO [M:0;jenkins-hbase4:44301] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-07-12 08:18:43,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689149899052] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689149899052,5,FailOnTimeoutGroup] 2023-07-12 08:18:43,717 INFO [M:0;jenkins-hbase4:44301] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 08:18:43,717 INFO [M:0;jenkins-hbase4:44301] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-12 08:18:43,717 DEBUG [M:0;jenkins-hbase4:44301] master.HMaster(1512): Stopping service threads 2023-07-12 08:18:43,717 INFO [M:0;jenkins-hbase4:44301] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 08:18:43,718 ERROR [M:0;jenkins-hbase4:44301] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-12 08:18:43,719 INFO [M:0;jenkins-hbase4:44301] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 08:18:43,719 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 08:18:43,719 DEBUG [M:0;jenkins-hbase4:44301] zookeeper.ZKUtil(398): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 08:18:43,719 WARN [M:0;jenkins-hbase4:44301] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 08:18:43,719 INFO [M:0;jenkins-hbase4:44301] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 08:18:43,719 INFO [M:0;jenkins-hbase4:44301] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 08:18:43,720 DEBUG [M:0;jenkins-hbase4:44301] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 08:18:43,720 INFO [M:0;jenkins-hbase4:44301] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:43,720 DEBUG [M:0;jenkins-hbase4:44301] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:43,720 DEBUG [M:0;jenkins-hbase4:44301] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 08:18:43,720 DEBUG [M:0;jenkins-hbase4:44301] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 08:18:43,720 INFO [M:0;jenkins-hbase4:44301] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=529.08 KB heapSize=633.28 KB 2023-07-12 08:18:43,740 INFO [M:0;jenkins-hbase4:44301] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=529.08 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/73a3fdebbf724115b237f8566b313e90 2023-07-12 08:18:43,751 DEBUG [M:0;jenkins-hbase4:44301] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/73a3fdebbf724115b237f8566b313e90 as hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/73a3fdebbf724115b237f8566b313e90 2023-07-12 08:18:43,757 INFO [M:0;jenkins-hbase4:44301] regionserver.HStore(1080): Added hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/73a3fdebbf724115b237f8566b313e90, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-12 08:18:43,758 INFO [M:0;jenkins-hbase4:44301] regionserver.HRegion(2948): Finished flush of dataSize ~529.08 KB/541777, heapSize ~633.27 KB/648464, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 38ms, sequenceid=1176, compaction requested=false 2023-07-12 08:18:43,760 INFO [M:0;jenkins-hbase4:44301] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:43,760 DEBUG [M:0;jenkins-hbase4:44301] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 08:18:43,764 INFO [M:0;jenkins-hbase4:44301] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 08:18:43,764 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 08:18:43,764 INFO [M:0;jenkins-hbase4:44301] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44301 2023-07-12 08:18:43,766 DEBUG [M:0;jenkins-hbase4:44301] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44301,1689149895428 already deleted, retry=false 2023-07-12 08:18:43,855 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:43,855 INFO [RS:1;jenkins-hbase4:42347] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42347,1689149897465; zookeeper connection closed. 
2023-07-12 08:18:43,855 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:42347-0x101589c725b0002, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:43,856 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@55739154] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@55739154 2023-07-12 08:18:43,955 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:43,955 INFO [RS:3;jenkins-hbase4:41817] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41817,1689149901106; zookeeper connection closed. 2023-07-12 08:18:43,956 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:41817-0x101589c725b000b, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:43,956 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@34159c37] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@34159c37 2023-07-12 08:18:44,056 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:44,056 INFO [RS:2;jenkins-hbase4:38647] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38647,1689149897534; zookeeper connection closed. 2023-07-12 08:18:44,056 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): regionserver:38647-0x101589c725b0003, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:44,056 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@188e6092] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@188e6092 2023-07-12 08:18:44,056 INFO [Listener at localhost/44853] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-12 08:18:44,156 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:44,156 INFO [M:0;jenkins-hbase4:44301] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44301,1689149895428; zookeeper connection closed. 
2023-07-12 08:18:44,156 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): master:44301-0x101589c725b0000, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:44,158 WARN [Listener at localhost/44853] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 08:18:44,166 INFO [Listener at localhost/44853] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 08:18:44,270 WARN [BP-1887393900-172.31.14.131-1689149891542 heartbeating to localhost/127.0.0.1:42813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 08:18:44,270 WARN [BP-1887393900-172.31.14.131-1689149891542 heartbeating to localhost/127.0.0.1:42813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1887393900-172.31.14.131-1689149891542 (Datanode Uuid 7a7b6c1a-e024-4635-8442-040a2924521b) service to localhost/127.0.0.1:42813 2023-07-12 08:18:44,272 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/dfs/data/data5/current/BP-1887393900-172.31.14.131-1689149891542] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:44,272 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/dfs/data/data6/current/BP-1887393900-172.31.14.131-1689149891542] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:44,275 WARN [Listener at localhost/44853] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 08:18:44,278 INFO [Listener at localhost/44853] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 08:18:44,381 WARN [BP-1887393900-172.31.14.131-1689149891542 heartbeating to localhost/127.0.0.1:42813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 08:18:44,381 WARN [BP-1887393900-172.31.14.131-1689149891542 heartbeating to localhost/127.0.0.1:42813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1887393900-172.31.14.131-1689149891542 (Datanode Uuid eb0c195d-22c8-4e63-8cc0-6218a9f8c698) service to localhost/127.0.0.1:42813 2023-07-12 08:18:44,382 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/dfs/data/data3/current/BP-1887393900-172.31.14.131-1689149891542] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:44,382 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/dfs/data/data4/current/BP-1887393900-172.31.14.131-1689149891542] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:44,383 WARN [Listener at localhost/44853] datanode.DirectoryScanner(534): 
DirectoryScanner: shutdown has been called 2023-07-12 08:18:44,386 INFO [Listener at localhost/44853] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 08:18:44,489 WARN [BP-1887393900-172.31.14.131-1689149891542 heartbeating to localhost/127.0.0.1:42813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 08:18:44,489 WARN [BP-1887393900-172.31.14.131-1689149891542 heartbeating to localhost/127.0.0.1:42813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1887393900-172.31.14.131-1689149891542 (Datanode Uuid 686f060b-5b06-4943-8d33-0849b15379b1) service to localhost/127.0.0.1:42813 2023-07-12 08:18:44,490 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/dfs/data/data1/current/BP-1887393900-172.31.14.131-1689149891542] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:44,490 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/cluster_4e6585a5-61f4-6c33-1fee-c9320c3d1c19/dfs/data/data2/current/BP-1887393900-172.31.14.131-1689149891542] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:44,517 INFO [Listener at localhost/44853] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 08:18:44,635 INFO [Listener at localhost/44853] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 08:18:44,688 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 08:18:44,689 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 08:18:44,689 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.log.dir so I do NOT create it in target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae 2023-07-12 08:18:44,689 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/580f474a-5a30-62f0-bdd8-a46943fc82c6/hadoop.tmp.dir so I do NOT create it in target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae 2023-07-12 08:18:44,689 INFO [Listener at localhost/44853] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/cluster_f95a6f5f-e6de-54b0-cd8f-3d48d26e596e, deleteOnExit=true 2023-07-12 08:18:44,689 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 08:18:44,689 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/test.cache.data in system properties and HBase conf 2023-07-12 08:18:44,689 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 08:18:44,689 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.log.dir in system properties and HBase conf 2023-07-12 08:18:44,689 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 08:18:44,690 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 08:18:44,690 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 08:18:44,690 DEBUG [Listener at localhost/44853] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-12 08:18:44,690 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 08:18:44,690 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 08:18:44,690 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 08:18:44,690 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 08:18:44,690 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 08:18:44,690 INFO [Listener at localhost/44853] 
hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 08:18:44,691 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 08:18:44,691 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 08:18:44,691 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 08:18:44,691 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/nfs.dump.dir in system properties and HBase conf 2023-07-12 08:18:44,691 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/java.io.tmpdir in system properties and HBase conf 2023-07-12 08:18:44,691 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 08:18:44,691 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 08:18:44,691 INFO [Listener at localhost/44853] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 08:18:44,695 WARN [Listener at localhost/44853] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 08:18:44,696 WARN [Listener at localhost/44853] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 08:18:44,732 DEBUG [Listener at localhost/44853-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101589c725b000a, quorum=127.0.0.1:51057, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 
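The restart that begins at 08:18:44,689 rebuilds the cluster with exactly the shape that StartMiniClusterOption logs: 1 master, 3 regionservers, 3 datanodes, 1 ZK server. A minimal sketch of that call, assuming the public StartMiniClusterOption builder from branch-2.4; TEST_UTIL is again a hypothetical stand-in for the test fixture:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class RestartSketch {
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void restart() throws Exception {
    // Matches StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }
}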
2023-07-12 08:18:44,732 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101589c725b000a, quorum=127.0.0.1:51057, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 08:18:44,745 WARN [Listener at localhost/44853] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:44,748 INFO [Listener at localhost/44853] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:44,756 INFO [Listener at localhost/44853] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/java.io.tmpdir/Jetty_localhost_36821_hdfs____1647ha/webapp 2023-07-12 08:18:44,869 INFO [Listener at localhost/44853] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36821 2023-07-12 08:18:44,874 WARN [Listener at localhost/44853] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 08:18:44,874 WARN [Listener at localhost/44853] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 08:18:44,936 WARN [Listener at localhost/41039] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:44,951 WARN [Listener at localhost/41039] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 08:18:44,953 WARN [Listener at localhost/41039] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:44,954 INFO [Listener at localhost/41039] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:44,959 INFO [Listener at localhost/41039] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/java.io.tmpdir/Jetty_localhost_42399_datanode____n5rhks/webapp 2023-07-12 08:18:45,060 INFO [Listener at localhost/41039] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42399 2023-07-12 08:18:45,069 WARN [Listener at localhost/35821] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:45,086 WARN [Listener at localhost/35821] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 08:18:45,088 WARN [Listener at localhost/35821] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:45,089 INFO [Listener at localhost/35821] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:45,094 INFO [Listener at localhost/35821] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/java.io.tmpdir/Jetty_localhost_37035_datanode____odeget/webapp 2023-07-12 08:18:45,180 INFO [Block report 
processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7d765546050ff8db: Processing first storage report for DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9 from datanode 829e6aa1-f50e-41de-9a02-e7f35ecfc368 2023-07-12 08:18:45,181 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7d765546050ff8db: from storage DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9 node DatanodeRegistration(127.0.0.1:41229, datanodeUuid=829e6aa1-f50e-41de-9a02-e7f35ecfc368, infoPort=46017, infoSecurePort=0, ipcPort=35821, storageInfo=lv=-57;cid=testClusterID;nsid=1583252746;c=1689149924699), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 08:18:45,181 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7d765546050ff8db: Processing first storage report for DS-0364afbb-0268-45d9-a4fe-d16dbb811914 from datanode 829e6aa1-f50e-41de-9a02-e7f35ecfc368 2023-07-12 08:18:45,181 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7d765546050ff8db: from storage DS-0364afbb-0268-45d9-a4fe-d16dbb811914 node DatanodeRegistration(127.0.0.1:41229, datanodeUuid=829e6aa1-f50e-41de-9a02-e7f35ecfc368, infoPort=46017, infoSecurePort=0, ipcPort=35821, storageInfo=lv=-57;cid=testClusterID;nsid=1583252746;c=1689149924699), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:45,219 INFO [Listener at localhost/35821] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37035 2023-07-12 08:18:45,226 WARN [Listener at localhost/41637] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:45,253 WARN [Listener at localhost/41637] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 08:18:45,258 WARN [Listener at localhost/41637] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:45,259 INFO [Listener at localhost/41637] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:45,270 INFO [Listener at localhost/41637] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/java.io.tmpdir/Jetty_localhost_36105_datanode____bssm56/webapp 2023-07-12 08:18:45,355 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x256b2fae27ed0c01: Processing first storage report for DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00 from datanode 24da76ec-e760-43b4-aba0-cb849c5ee77a 2023-07-12 08:18:45,356 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x256b2fae27ed0c01: from storage DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00 node DatanodeRegistration(127.0.0.1:45421, datanodeUuid=24da76ec-e760-43b4-aba0-cb849c5ee77a, infoPort=46683, infoSecurePort=0, ipcPort=41637, storageInfo=lv=-57;cid=testClusterID;nsid=1583252746;c=1689149924699), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 08:18:45,356 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x256b2fae27ed0c01: Processing first storage report for DS-f38ae11a-c065-4a98-af5f-1ba644e7722c from datanode 
24da76ec-e760-43b4-aba0-cb849c5ee77a 2023-07-12 08:18:45,356 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x256b2fae27ed0c01: from storage DS-f38ae11a-c065-4a98-af5f-1ba644e7722c node DatanodeRegistration(127.0.0.1:45421, datanodeUuid=24da76ec-e760-43b4-aba0-cb849c5ee77a, infoPort=46683, infoSecurePort=0, ipcPort=41637, storageInfo=lv=-57;cid=testClusterID;nsid=1583252746;c=1689149924699), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:45,398 INFO [Listener at localhost/41637] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36105 2023-07-12 08:18:45,407 WARN [Listener at localhost/36551] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:45,563 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5eac8386584c47d5: Processing first storage report for DS-01a317f0-d8f0-4962-9a54-f71df947c3b2 from datanode bc8298c0-2d4e-4b98-bc4f-3518c111353c 2023-07-12 08:18:45,563 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5eac8386584c47d5: from storage DS-01a317f0-d8f0-4962-9a54-f71df947c3b2 node DatanodeRegistration(127.0.0.1:35229, datanodeUuid=bc8298c0-2d4e-4b98-bc4f-3518c111353c, infoPort=33743, infoSecurePort=0, ipcPort=36551, storageInfo=lv=-57;cid=testClusterID;nsid=1583252746;c=1689149924699), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:45,563 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5eac8386584c47d5: Processing first storage report for DS-5ea40ee2-03c1-427d-8168-fef65c6d3949 from datanode bc8298c0-2d4e-4b98-bc4f-3518c111353c 2023-07-12 08:18:45,563 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5eac8386584c47d5: from storage DS-5ea40ee2-03c1-427d-8168-fef65c6d3949 node DatanodeRegistration(127.0.0.1:35229, datanodeUuid=bc8298c0-2d4e-4b98-bc4f-3518c111353c, infoPort=33743, infoSecurePort=0, ipcPort=36551, storageInfo=lv=-57;cid=testClusterID;nsid=1583252746;c=1689149924699), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:45,620 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 08:18:45,620 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 08:18:45,620 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 08:18:45,627 DEBUG [Listener at localhost/36551] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae 2023-07-12 08:18:45,633 INFO [Listener at localhost/36551] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/cluster_f95a6f5f-e6de-54b0-cd8f-3d48d26e596e/zookeeper_0, 
clientPort=63658, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/cluster_f95a6f5f-e6de-54b0-cd8f-3d48d26e596e/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/cluster_f95a6f5f-e6de-54b0-cd8f-3d48d26e596e/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 08:18:45,635 INFO [Listener at localhost/36551] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63658 2023-07-12 08:18:45,636 INFO [Listener at localhost/36551] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,637 INFO [Listener at localhost/36551] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,659 INFO [Listener at localhost/36551] util.FSUtils(471): Created version file at hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc with version=8 2023-07-12 08:18:45,660 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/hbase-staging 2023-07-12 08:18:45,661 DEBUG [Listener at localhost/36551] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 08:18:45,661 DEBUG [Listener at localhost/36551] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 08:18:45,661 DEBUG [Listener at localhost/36551] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 08:18:45,661 DEBUG [Listener at localhost/36551] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
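Once MiniZooKeeperCluster reports its client port (63658 in this run) and FSUtils has written the version file under the new hbase.rootdir, the quorum, port and root directory are all visible through the utility's Configuration. A brief sketch of how a test could read them and obtain a client connection, assuming the usual HBaseTestingUtility accessors and HConstants keys; the example values in the comments are just the ones from this log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class ClusterConfigSketch {
  static void inspect(HBaseTestingUtility testUtil) throws Exception {
    Configuration conf = testUtil.getConfiguration();
    String quorum  = conf.get(HConstants.ZOOKEEPER_QUORUM);       // 127.0.0.1 in this run
    String zkPort  = conf.get(HConstants.ZOOKEEPER_CLIENT_PORT);  // 63658 in this run
    String rootDir = conf.get(HConstants.HBASE_DIR);              // hdfs://localhost:41039/user/jenkins/test-data/... here
    // A shared Connection/Admin backed by the minicluster.
    Connection conn = testUtil.getConnection();
    Admin admin = conn.getAdmin();
    System.out.println(quorum + ":" + zkPort + " rootdir=" + rootDir + " admin=" + admin);
  }
}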
2023-07-12 08:18:45,662 INFO [Listener at localhost/36551] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:45,662 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,662 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,662 INFO [Listener at localhost/36551] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:45,662 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,662 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:45,662 INFO [Listener at localhost/36551] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:45,664 INFO [Listener at localhost/36551] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46573 2023-07-12 08:18:45,665 INFO [Listener at localhost/36551] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,666 INFO [Listener at localhost/36551] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,667 INFO [Listener at localhost/36551] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46573 connecting to ZooKeeper ensemble=127.0.0.1:63658 2023-07-12 08:18:45,675 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:465730x0, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:45,677 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46573-0x101589cec020000 connected 2023-07-12 08:18:45,697 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:45,697 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:45,698 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:45,703 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46573 2023-07-12 08:18:45,703 DEBUG [Listener at localhost/36551] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46573 2023-07-12 08:18:45,704 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46573 2023-07-12 08:18:45,707 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46573 2023-07-12 08:18:45,708 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46573 2023-07-12 08:18:45,711 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:45,711 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:45,711 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:45,711 INFO [Listener at localhost/36551] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 08:18:45,711 INFO [Listener at localhost/36551] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:45,711 INFO [Listener at localhost/36551] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:45,712 INFO [Listener at localhost/36551] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 08:18:45,712 INFO [Listener at localhost/36551] http.HttpServer(1146): Jetty bound to port 41325 2023-07-12 08:18:45,712 INFO [Listener at localhost/36551] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:45,721 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,722 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6eca1326{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:45,722 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,722 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6618da3c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:45,731 INFO [Listener at localhost/36551] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:45,732 INFO [Listener at localhost/36551] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:45,732 INFO [Listener at localhost/36551] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:45,733 INFO [Listener at localhost/36551] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 08:18:45,734 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,735 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@49ce88f{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 08:18:45,736 INFO [Listener at localhost/36551] server.AbstractConnector(333): Started ServerConnector@262ac480{HTTP/1.1, (http/1.1)}{0.0.0.0:41325} 2023-07-12 08:18:45,736 INFO [Listener at localhost/36551] server.Server(415): Started @36293ms 2023-07-12 08:18:45,736 INFO [Listener at localhost/36551] master.HMaster(444): hbase.rootdir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc, hbase.cluster.distributed=false 2023-07-12 08:18:45,751 INFO [Listener at localhost/36551] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:45,751 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,751 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,751 INFO [Listener at localhost/36551] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 
08:18:45,751 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,751 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:45,751 INFO [Listener at localhost/36551] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:45,753 INFO [Listener at localhost/36551] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46091 2023-07-12 08:18:45,753 INFO [Listener at localhost/36551] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:45,754 DEBUG [Listener at localhost/36551] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:45,754 INFO [Listener at localhost/36551] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,756 INFO [Listener at localhost/36551] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,757 INFO [Listener at localhost/36551] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46091 connecting to ZooKeeper ensemble=127.0.0.1:63658 2023-07-12 08:18:45,759 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:460910x0, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:45,760 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46091-0x101589cec020001 connected 2023-07-12 08:18:45,760 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:45,761 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:45,761 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:45,762 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46091 2023-07-12 08:18:45,762 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46091 2023-07-12 08:18:45,763 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46091 2023-07-12 08:18:45,764 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46091 2023-07-12 08:18:45,764 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46091 2023-07-12 08:18:45,766 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:45,766 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:45,766 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:45,767 INFO [Listener at localhost/36551] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:45,767 INFO [Listener at localhost/36551] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:45,767 INFO [Listener at localhost/36551] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:45,767 INFO [Listener at localhost/36551] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 08:18:45,768 INFO [Listener at localhost/36551] http.HttpServer(1146): Jetty bound to port 35275 2023-07-12 08:18:45,768 INFO [Listener at localhost/36551] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:45,769 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,770 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e0a0d01{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:45,770 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,770 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@150457ec{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:45,777 INFO [Listener at localhost/36551] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:45,778 INFO [Listener at localhost/36551] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:45,778 INFO [Listener at localhost/36551] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:45,779 INFO [Listener at localhost/36551] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:45,780 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,780 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3f6896e5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:45,781 INFO [Listener at localhost/36551] server.AbstractConnector(333): Started ServerConnector@37060062{HTTP/1.1, (http/1.1)}{0.0.0.0:35275} 2023-07-12 08:18:45,782 INFO [Listener at localhost/36551] server.Server(415): Started @36339ms 2023-07-12 08:18:45,794 INFO [Listener at localhost/36551] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:45,794 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,794 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,794 INFO [Listener at localhost/36551] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:45,794 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,794 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:45,794 INFO [Listener at localhost/36551] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:45,795 INFO [Listener at localhost/36551] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33385 2023-07-12 08:18:45,796 INFO [Listener at localhost/36551] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:45,797 DEBUG [Listener at localhost/36551] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:45,797 INFO [Listener at localhost/36551] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,798 INFO [Listener at localhost/36551] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,800 INFO [Listener at localhost/36551] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33385 connecting to ZooKeeper ensemble=127.0.0.1:63658 2023-07-12 08:18:45,803 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:333850x0, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:45,804 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33385-0x101589cec020002 connected 2023-07-12 08:18:45,804 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): 
regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:45,805 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:45,805 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:45,806 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33385 2023-07-12 08:18:45,807 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33385 2023-07-12 08:18:45,809 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33385 2023-07-12 08:18:45,810 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33385 2023-07-12 08:18:45,814 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33385 2023-07-12 08:18:45,816 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:45,816 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:45,816 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:45,816 INFO [Listener at localhost/36551] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:45,816 INFO [Listener at localhost/36551] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:45,816 INFO [Listener at localhost/36551] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:45,817 INFO [Listener at localhost/36551] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
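The lines from 08:18:45,662 onward show the master (RPC port 46573) and three regionservers (46091, 33385, 39181) each binding a NettyRpcServer, registering ZooKeeper watchers, and starting an embedded Jetty info server. Once this startup completes later in the log, the running processes are reachable through the mini cluster handle; a hedged sketch, assuming the standard MiniHBaseCluster accessors:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.master.HMaster;

public class ClusterInspectSketch {
  static void inspect(HBaseTestingUtility testUtil) {
    MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
    HMaster master = cluster.getMaster();  // jenkins-hbase4.apache.org,46573,... in this run
    int liveRegionServers = cluster.getLiveRegionServerThreads().size();  // expected to reach 3
    System.out.println(master.getServerName() + " with " + liveRegionServers + " live regionservers");
  }
}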
2023-07-12 08:18:45,817 INFO [Listener at localhost/36551] http.HttpServer(1146): Jetty bound to port 45545 2023-07-12 08:18:45,817 INFO [Listener at localhost/36551] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:45,819 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,820 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6de35580{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:45,820 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,820 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6771d4ba{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:45,827 INFO [Listener at localhost/36551] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:45,828 INFO [Listener at localhost/36551] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:45,829 INFO [Listener at localhost/36551] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:45,829 INFO [Listener at localhost/36551] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:45,831 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,832 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2b5e8561{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:45,833 INFO [Listener at localhost/36551] server.AbstractConnector(333): Started ServerConnector@7aa1b707{HTTP/1.1, (http/1.1)}{0.0.0.0:45545} 2023-07-12 08:18:45,833 INFO [Listener at localhost/36551] server.Server(415): Started @36390ms 2023-07-12 08:18:45,845 INFO [Listener at localhost/36551] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:45,845 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,845 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,845 INFO [Listener at localhost/36551] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:45,846 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-12 08:18:45,846 INFO [Listener at localhost/36551] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:45,846 INFO [Listener at localhost/36551] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:45,847 INFO [Listener at localhost/36551] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39181 2023-07-12 08:18:45,848 INFO [Listener at localhost/36551] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:45,849 DEBUG [Listener at localhost/36551] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:45,849 INFO [Listener at localhost/36551] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,851 INFO [Listener at localhost/36551] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,852 INFO [Listener at localhost/36551] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39181 connecting to ZooKeeper ensemble=127.0.0.1:63658 2023-07-12 08:18:45,856 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:391810x0, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:45,857 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): regionserver:391810x0, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:45,857 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39181-0x101589cec020003 connected 2023-07-12 08:18:45,857 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:45,858 DEBUG [Listener at localhost/36551] zookeeper.ZKUtil(164): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:45,858 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39181 2023-07-12 08:18:45,858 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39181 2023-07-12 08:18:45,904 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39181 2023-07-12 08:18:45,907 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39181 2023-07-12 08:18:45,907 DEBUG [Listener at localhost/36551] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39181 2023-07-12 08:18:45,909 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:45,909 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:45,909 INFO [Listener at localhost/36551] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:45,910 INFO [Listener at localhost/36551] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:45,910 INFO [Listener at localhost/36551] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:45,910 INFO [Listener at localhost/36551] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:45,910 INFO [Listener at localhost/36551] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 08:18:45,911 INFO [Listener at localhost/36551] http.HttpServer(1146): Jetty bound to port 38851 2023-07-12 08:18:45,911 INFO [Listener at localhost/36551] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:45,913 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,914 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4b95847b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:45,915 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,915 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1d5d1fe6{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:45,922 INFO [Listener at localhost/36551] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:45,922 INFO [Listener at localhost/36551] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:45,922 INFO [Listener at localhost/36551] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:45,923 INFO [Listener at localhost/36551] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:45,923 INFO [Listener at localhost/36551] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:45,924 INFO [Listener at localhost/36551] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1cbd4662{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:45,925 INFO [Listener at localhost/36551] server.AbstractConnector(333): Started ServerConnector@2be675c5{HTTP/1.1, (http/1.1)}{0.0.0.0:38851} 2023-07-12 08:18:45,925 INFO [Listener at localhost/36551] server.Server(415): Started @36482ms 2023-07-12 08:18:45,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:45,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@54dab6ef{HTTP/1.1, (http/1.1)}{0.0.0.0:43967} 2023-07-12 08:18:45,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @36491ms 2023-07-12 08:18:45,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46573,1689149925661 2023-07-12 08:18:45,936 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 08:18:45,936 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46573,1689149925661 2023-07-12 08:18:45,938 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:45,938 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:45,938 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:45,938 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:45,938 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:45,940 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 08:18:45,942 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46573,1689149925661 from backup master directory 2023-07-12 
08:18:45,942 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 08:18:45,943 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46573,1689149925661 2023-07-12 08:18:45,943 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 08:18:45,943 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 08:18:45,943 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46573,1689149925661 2023-07-12 08:18:45,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/hbase.id with ID: 287043c9-8880-4667-a078-a17c58864760 2023-07-12 08:18:45,977 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:45,980 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:45,998 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7e9ee6b5 to 127.0.0.1:63658 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:46,002 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6084eed7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:46,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:46,003 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 08:18:46,006 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:46,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/data/master/store-tmp 2023-07-12 08:18:46,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:46,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 08:18:46,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:46,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:46,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 08:18:46,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:46,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 08:18:46,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 08:18:46,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/WALs/jenkins-hbase4.apache.org,46573,1689149925661 2023-07-12 08:18:46,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46573%2C1689149925661, suffix=, logDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/WALs/jenkins-hbase4.apache.org,46573,1689149925661, archiveDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/oldWALs, maxLogs=10 2023-07-12 08:18:46,038 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35229,DS-01a317f0-d8f0-4962-9a54-f71df947c3b2,DISK] 2023-07-12 08:18:46,039 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41229,DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9,DISK] 2023-07-12 08:18:46,038 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45421,DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00,DISK] 2023-07-12 08:18:46,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/WALs/jenkins-hbase4.apache.org,46573,1689149925661/jenkins-hbase4.apache.org%2C46573%2C1689149925661.1689149926022 2023-07-12 08:18:46,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35229,DS-01a317f0-d8f0-4962-9a54-f71df947c3b2,DISK], DatanodeInfoWithStorage[127.0.0.1:45421,DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00,DISK], DatanodeInfoWithStorage[127.0.0.1:41229,DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9,DISK]] 2023-07-12 08:18:46,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:46,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:46,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:46,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:46,045 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:46,047 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 08:18:46,047 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 08:18:46,048 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:46,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:46,049 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:46,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:46,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:46,054 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11074580640, jitterRate=0.03140069544315338}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:46,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 08:18:46,054 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 08:18:46,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 08:18:46,056 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 08:18:46,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 08:18:46,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 08:18:46,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 08:18:46,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 08:18:46,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 08:18:46,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-12 08:18:46,064 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 08:18:46,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 08:18:46,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 08:18:46,067 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:46,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 08:18:46,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 08:18:46,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 08:18:46,069 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:46,069 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:46,069 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-12 08:18:46,069 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:46,070 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:46,070 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46573,1689149925661, sessionid=0x101589cec020000, setting cluster-up flag (Was=false) 2023-07-12 08:18:46,076 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:46,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 08:18:46,081 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46573,1689149925661 2023-07-12 08:18:46,083 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:46,090 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 08:18:46,091 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46573,1689149925661 2023-07-12 08:18:46,092 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.hbase-snapshot/.tmp 2023-07-12 08:18:46,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 08:18:46,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 08:18:46,094 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 08:18:46,094 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-12 08:18:46,095 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 08:18:46,095 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-12 08:18:46,096 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 08:18:46,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 08:18:46,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 08:18:46,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 08:18:46,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-12 08:18:46,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:46,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:46,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:46,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:46,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-12 08:18:46,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:46,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689149956110 2023-07-12 08:18:46,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 08:18:46,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 08:18:46,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 08:18:46,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 08:18:46,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 08:18:46,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 08:18:46,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:46,110 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 08:18:46,110 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 08:18:46,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 08:18:46,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 08:18:46,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 08:18:46,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 08:18:46,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 08:18:46,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689149926112,5,FailOnTimeoutGroup] 2023-07-12 08:18:46,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689149926112,5,FailOnTimeoutGroup] 2023-07-12 08:18:46,112 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,112 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 08:18:46,112 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,112 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:46,113 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:46,128 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(951): ClusterId : 287043c9-8880-4667-a078-a17c58864760 2023-07-12 08:18:46,131 DEBUG [RS:1;jenkins-hbase4:33385] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:46,131 INFO [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(951): ClusterId : 287043c9-8880-4667-a078-a17c58864760 2023-07-12 08:18:46,131 DEBUG [RS:0;jenkins-hbase4:46091] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:46,132 INFO [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(951): ClusterId : 287043c9-8880-4667-a078-a17c58864760 2023-07-12 08:18:46,132 DEBUG [RS:2;jenkins-hbase4:39181] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:46,135 DEBUG [RS:1;jenkins-hbase4:33385] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:46,135 DEBUG [RS:1;jenkins-hbase4:33385] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:46,135 DEBUG [RS:0;jenkins-hbase4:46091] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:46,135 DEBUG [RS:0;jenkins-hbase4:46091] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:46,136 DEBUG [RS:2;jenkins-hbase4:39181] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:46,136 DEBUG [RS:2;jenkins-hbase4:39181] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:46,138 DEBUG [RS:1;jenkins-hbase4:33385] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:46,141 DEBUG [RS:1;jenkins-hbase4:33385] zookeeper.ReadOnlyZKClient(139): Connect 0x6cf9f1fd to 127.0.0.1:63658 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:46,141 DEBUG [RS:2;jenkins-hbase4:39181] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:46,142 DEBUG [RS:0;jenkins-hbase4:46091] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:46,142 DEBUG [RS:2;jenkins-hbase4:39181] 
zookeeper.ReadOnlyZKClient(139): Connect 0x40ec0d9e to 127.0.0.1:63658 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:46,147 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:46,147 DEBUG [RS:0;jenkins-hbase4:46091] zookeeper.ReadOnlyZKClient(139): Connect 0x62fbe037 to 127.0.0.1:63658 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:46,150 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:46,151 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc 2023-07-12 08:18:46,159 DEBUG [RS:1;jenkins-hbase4:33385] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@623e0212, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:46,159 DEBUG [RS:1;jenkins-hbase4:33385] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66019a8d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:46,159 DEBUG [RS:0;jenkins-hbase4:46091] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68038b47, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:46,159 DEBUG [RS:2;jenkins-hbase4:39181] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a1466ee, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:46,160 DEBUG [RS:0;jenkins-hbase4:46091] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2d011823, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:46,160 DEBUG [RS:2;jenkins-hbase4:39181] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@f16e1c8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:46,170 DEBUG [RS:1;jenkins-hbase4:33385] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33385 2023-07-12 08:18:46,170 INFO [RS:1;jenkins-hbase4:33385] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:46,170 DEBUG [RS:0;jenkins-hbase4:46091] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46091 2023-07-12 08:18:46,170 INFO [RS:1;jenkins-hbase4:33385] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:46,170 INFO [RS:0;jenkins-hbase4:46091] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:46,170 INFO [RS:0;jenkins-hbase4:46091] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:46,170 DEBUG [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 08:18:46,170 DEBUG [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 08:18:46,171 INFO [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46573,1689149925661 with isa=jenkins-hbase4.apache.org/172.31.14.131:46091, startcode=1689149925750 2023-07-12 08:18:46,171 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46573,1689149925661 with isa=jenkins-hbase4.apache.org/172.31.14.131:33385, startcode=1689149925793 2023-07-12 08:18:46,171 DEBUG [RS:0;jenkins-hbase4:46091] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:46,171 DEBUG [RS:1;jenkins-hbase4:33385] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:46,172 DEBUG [RS:2;jenkins-hbase4:39181] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:39181 2023-07-12 08:18:46,172 INFO [RS:2;jenkins-hbase4:39181] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:46,172 INFO [RS:2;jenkins-hbase4:39181] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:46,172 DEBUG [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 08:18:46,173 INFO [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46573,1689149925661 with isa=jenkins-hbase4.apache.org/172.31.14.131:39181, startcode=1689149925844 2023-07-12 08:18:46,173 DEBUG [RS:2;jenkins-hbase4:39181] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:46,179 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46685, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:46,179 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45843, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:46,181 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46573] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:46,181 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 08:18:46,182 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 08:18:46,184 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46573] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,184 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 08:18:46,184 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 08:18:46,184 DEBUG [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc 2023-07-12 08:18:46,184 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52571, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:46,184 DEBUG [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc 2023-07-12 08:18:46,184 DEBUG [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41039 2023-07-12 08:18:46,184 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46573] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,184 DEBUG [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41039 2023-07-12 08:18:46,184 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 08:18:46,184 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 08:18:46,184 DEBUG [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41325 2023-07-12 08:18:46,184 DEBUG [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41325 2023-07-12 08:18:46,185 DEBUG [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc 2023-07-12 08:18:46,185 DEBUG [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41039 2023-07-12 08:18:46,185 DEBUG [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41325 2023-07-12 08:18:46,186 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:46,191 DEBUG [RS:2;jenkins-hbase4:39181] zookeeper.ZKUtil(162): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,191 WARN [RS:2;jenkins-hbase4:39181] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 08:18:46,191 INFO [RS:2;jenkins-hbase4:39181] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:46,191 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39181,1689149925844] 2023-07-12 08:18:46,191 DEBUG [RS:1;jenkins-hbase4:33385] zookeeper.ZKUtil(162): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,191 DEBUG [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,191 WARN [RS:1;jenkins-hbase4:33385] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 08:18:46,191 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33385,1689149925793] 2023-07-12 08:18:46,191 INFO [RS:1;jenkins-hbase4:33385] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:46,192 DEBUG [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,191 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46091,1689149925750] 2023-07-12 08:18:46,192 DEBUG [RS:0;jenkins-hbase4:46091] zookeeper.ZKUtil(162): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:46,192 WARN [RS:0;jenkins-hbase4:46091] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 08:18:46,192 INFO [RS:0;jenkins-hbase4:46091] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:46,192 DEBUG [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:46,196 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:46,201 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 08:18:46,204 DEBUG [RS:2;jenkins-hbase4:39181] zookeeper.ZKUtil(162): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,205 DEBUG [RS:0;jenkins-hbase4:46091] zookeeper.ZKUtil(162): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,205 DEBUG [RS:2;jenkins-hbase4:39181] zookeeper.ZKUtil(162): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,205 DEBUG [RS:0;jenkins-hbase4:46091] zookeeper.ZKUtil(162): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,205 DEBUG [RS:1;jenkins-hbase4:33385] zookeeper.ZKUtil(162): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,205 DEBUG [RS:0;jenkins-hbase4:46091] zookeeper.ZKUtil(162): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:46,206 DEBUG [RS:2;jenkins-hbase4:39181] zookeeper.ZKUtil(162): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:46,206 DEBUG [RS:1;jenkins-hbase4:33385] zookeeper.ZKUtil(162): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,206 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/info 2023-07-12 08:18:46,207 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 08:18:46,207 DEBUG [RS:1;jenkins-hbase4:33385] zookeeper.ZKUtil(162): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:46,207 DEBUG [RS:0;jenkins-hbase4:46091] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:46,207 DEBUG [RS:2;jenkins-hbase4:39181] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:46,208 INFO [RS:0;jenkins-hbase4:46091] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:46,208 INFO [RS:2;jenkins-hbase4:39181] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:46,208 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:46,208 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 08:18:46,209 INFO [RS:0;jenkins-hbase4:46091] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:46,209 DEBUG [RS:1;jenkins-hbase4:33385] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:46,209 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/rep_barrier 2023-07-12 08:18:46,210 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 08:18:46,210 INFO [RS:1;jenkins-hbase4:33385] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:46,210 INFO [RS:0;jenkins-hbase4:46091] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 08:18:46,210 INFO [RS:2;jenkins-hbase4:39181] 
regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:46,210 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,210 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:46,210 INFO [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:46,210 INFO [RS:2;jenkins-hbase4:39181] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 08:18:46,211 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 08:18:46,211 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,212 INFO [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:46,212 INFO [RS:1;jenkins-hbase4:33385] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:46,213 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,213 INFO [RS:1;jenkins-hbase4:33385] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 08:18:46,213 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,213 DEBUG [RS:0;jenkins-hbase4:46091] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,213 DEBUG [RS:0;jenkins-hbase4:46091] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,213 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:46,213 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:46,213 DEBUG [RS:0;jenkins-hbase4:46091] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:2;jenkins-hbase4:39181] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:0;jenkins-hbase4:46091] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:2;jenkins-hbase4:39181] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:0;jenkins-hbase4:46091] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:2;jenkins-hbase4:39181] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:0;jenkins-hbase4:46091] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:46,213 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/table 2023-07-12 08:18:46,214 DEBUG [RS:0;jenkins-hbase4:46091] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:2;jenkins-hbase4:39181] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:0;jenkins-hbase4:46091] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:2;jenkins-hbase4:39181] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:0;jenkins-hbase4:46091] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:2;jenkins-hbase4:39181] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:46,214 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:46,214 DEBUG [RS:2;jenkins-hbase4:39181] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 08:18:46,214 DEBUG [RS:0;jenkins-hbase4:46091] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:2;jenkins-hbase4:39181] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,214 DEBUG [RS:1;jenkins-hbase4:33385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,215 DEBUG [RS:2;jenkins-hbase4:39181] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,215 DEBUG [RS:1;jenkins-hbase4:33385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,215 DEBUG [RS:2;jenkins-hbase4:39181] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,215 DEBUG [RS:1;jenkins-hbase4:33385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,215 DEBUG [RS:1;jenkins-hbase4:33385] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,215 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:46,215 DEBUG [RS:1;jenkins-hbase4:33385] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,216 DEBUG [RS:1;jenkins-hbase4:33385] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:46,216 DEBUG [RS:1;jenkins-hbase4:33385] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,216 DEBUG [RS:1;jenkins-hbase4:33385] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,216 DEBUG [RS:1;jenkins-hbase4:33385] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,216 DEBUG [RS:1;jenkins-hbase4:33385] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:46,219 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,219 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,219 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,219 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,220 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,220 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,221 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,221 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,221 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,221 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,221 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,221 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740 2023-07-12 08:18:46,221 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,222 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740 2023-07-12 08:18:46,224 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
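[editor's note] The ChoreService lines above (CompactionChecker every 1 s, nonceCleaner every 6 min, FileSystemUtilizationChore every 5 min, and so on) are all instances of the same internal ScheduledChore mechanism. A best-effort sketch against that internal API as it appears in branch-2.4; ScheduledChore and ChoreService are private-audience classes, so the exact signatures here are an assumption, not a stable contract:

    // Sketch of the internal chore mechanism; mirrors the log rather than documenting a public API.
    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    final class LoggingChore extends ScheduledChore {
      LoggingChore(Stoppable stopper) {
        super("LoggingChore", stopper, 1000); // name, stopper, period in milliseconds
      }
      @Override
      protected void chore() {
        // CompactionChecker, MemstoreFlusherChore etc. do their periodic work here.
        System.out.println("chore tick");
      }
    }
    // A ChoreService instance then runs it: new ChoreService("demo").scheduleChore(new LoggingChore(stopper));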
2023-07-12 08:18:46,225 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 08:18:46,227 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:46,228 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9579185920, jitterRate=-0.10786879062652588}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 08:18:46,228 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 08:18:46,228 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 08:18:46,228 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 08:18:46,228 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 08:18:46,228 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 08:18:46,228 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 08:18:46,228 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 08:18:46,228 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 08:18:46,229 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 08:18:46,229 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 08:18:46,229 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 08:18:46,230 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 08:18:46,231 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 08:18:46,235 INFO [RS:1;jenkins-hbase4:33385] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:46,235 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33385,1689149925793-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,237 INFO [RS:2;jenkins-hbase4:39181] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:46,237 INFO [RS:0;jenkins-hbase4:46091] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:46,237 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39181,1689149925844-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
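[editor's note] The figures inside the "Opened 1588230740" line above fall out of two standard settings. A back-of-the-envelope check, assuming the stock defaults hbase.hregion.max.filesize=10737418240 and hbase.hregion.memstore.flush.size=134217728 (neither key is printed in the log, so both are assumptions):

    // Arithmetic check only; not executed by the test.
    long maxFileSize = 10_737_418_240L;        // assumed hbase.hregion.max.filesize (10 GB)
    double jitterRate = -0.10786879062652588;  // jitterRate printed in the log line above
    long desiredMaxFileSize = (long) (maxFileSize * (1 + jitterRate));
    // desiredMaxFileSize ~= 9579185920, matching ConstantSizeRegionSplitPolicy above.

    long memstoreFlushSize = 134_217_728L;     // assumed hbase.hregion.memstore.flush.size (128 MB)
    int metaFamilies = 3;                      // hbase:meta has info, rep_barrier and table
    long flushSizeLowerBound = memstoreFlushSize / metaFamilies;
    // flushSizeLowerBound = 44739242 (~42.7 M), matching FlushLargeStoresPolicy above.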
2023-07-12 08:18:46,237 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46091,1689149925750-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,247 INFO [RS:1;jenkins-hbase4:33385] regionserver.Replication(203): jenkins-hbase4.apache.org,33385,1689149925793 started 2023-07-12 08:18:46,247 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33385,1689149925793, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33385, sessionid=0x101589cec020002 2023-07-12 08:18:46,247 DEBUG [RS:1;jenkins-hbase4:33385] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:46,247 DEBUG [RS:1;jenkins-hbase4:33385] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,247 DEBUG [RS:1;jenkins-hbase4:33385] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33385,1689149925793' 2023-07-12 08:18:46,247 DEBUG [RS:1;jenkins-hbase4:33385] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:46,247 DEBUG [RS:1;jenkins-hbase4:33385] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:46,247 INFO [RS:2;jenkins-hbase4:39181] regionserver.Replication(203): jenkins-hbase4.apache.org,39181,1689149925844 started 2023-07-12 08:18:46,247 INFO [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39181,1689149925844, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39181, sessionid=0x101589cec020003 2023-07-12 08:18:46,247 DEBUG [RS:2;jenkins-hbase4:39181] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:46,248 DEBUG [RS:2;jenkins-hbase4:39181] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,248 DEBUG [RS:2;jenkins-hbase4:39181] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39181,1689149925844' 2023-07-12 08:18:46,248 DEBUG [RS:2;jenkins-hbase4:39181] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:46,248 DEBUG [RS:1;jenkins-hbase4:33385] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:46,248 DEBUG [RS:1;jenkins-hbase4:33385] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 08:18:46,248 DEBUG [RS:1;jenkins-hbase4:33385] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,248 DEBUG [RS:1;jenkins-hbase4:33385] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33385,1689149925793' 2023-07-12 08:18:46,248 DEBUG [RS:1;jenkins-hbase4:33385] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:46,248 INFO [RS:0;jenkins-hbase4:46091] regionserver.Replication(203): jenkins-hbase4.apache.org,46091,1689149925750 started 2023-07-12 08:18:46,248 INFO [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1637): Serving as 
jenkins-hbase4.apache.org,46091,1689149925750, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46091, sessionid=0x101589cec020001 2023-07-12 08:18:46,248 DEBUG [RS:2;jenkins-hbase4:39181] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:46,248 DEBUG [RS:0;jenkins-hbase4:46091] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:46,248 DEBUG [RS:0;jenkins-hbase4:46091] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:46,248 DEBUG [RS:0;jenkins-hbase4:46091] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46091,1689149925750' 2023-07-12 08:18:46,248 DEBUG [RS:0;jenkins-hbase4:46091] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:46,248 DEBUG [RS:1;jenkins-hbase4:33385] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:46,248 DEBUG [RS:2;jenkins-hbase4:39181] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:46,248 DEBUG [RS:2;jenkins-hbase4:39181] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 08:18:46,248 DEBUG [RS:2;jenkins-hbase4:39181] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,248 DEBUG [RS:2;jenkins-hbase4:39181] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39181,1689149925844' 2023-07-12 08:18:46,248 DEBUG [RS:2;jenkins-hbase4:39181] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:46,248 DEBUG [RS:0;jenkins-hbase4:46091] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:46,248 DEBUG [RS:1;jenkins-hbase4:33385] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:46,249 INFO [RS:1;jenkins-hbase4:33385] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 08:18:46,249 DEBUG [RS:2;jenkins-hbase4:39181] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:46,249 DEBUG [RS:0;jenkins-hbase4:46091] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:46,249 DEBUG [RS:0;jenkins-hbase4:46091] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 08:18:46,249 DEBUG [RS:0;jenkins-hbase4:46091] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:46,249 DEBUG [RS:0;jenkins-hbase4:46091] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46091,1689149925750' 2023-07-12 08:18:46,249 DEBUG [RS:0;jenkins-hbase4:46091] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:46,249 DEBUG [RS:2;jenkins-hbase4:39181] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:46,249 INFO [RS:2;jenkins-hbase4:39181] 
quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 08:18:46,249 DEBUG [RS:0;jenkins-hbase4:46091] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:46,249 DEBUG [RS:0;jenkins-hbase4:46091] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:46,249 INFO [RS:0;jenkins-hbase4:46091] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 08:18:46,251 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,251 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,251 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,251 DEBUG [RS:0;jenkins-hbase4:46091] zookeeper.ZKUtil(398): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 08:18:46,251 DEBUG [RS:1;jenkins-hbase4:33385] zookeeper.ZKUtil(398): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 08:18:46,251 INFO [RS:1;jenkins-hbase4:33385] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 08:18:46,251 DEBUG [RS:2;jenkins-hbase4:39181] zookeeper.ZKUtil(398): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 08:18:46,251 INFO [RS:0;jenkins-hbase4:46091] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 08:18:46,251 INFO [RS:2;jenkins-hbase4:39181] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 08:18:46,252 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,252 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,252 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,252 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,252 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,252 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
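[editor's note] The procedure-member lines above show each region server registering with two ZooKeeper-coordinated procedure pools (flush-table-proc and online-snapshot) and probing the rpc-throttle znode. A minimal sketch of inspecting the same znodes with the plain ZooKeeper client, assuming the quorum address 127.0.0.1:63658 printed for this run (it changes on every test execution):

    // Read-only illustration; not part of the test.
    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class ProcZNodeSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:63658", 30_000, event -> { });
        // Members advertise under .../acquired; coordinators write abort markers under .../abort.
        List<String> flushProcs = zk.getChildren("/hbase/flush-table-proc/acquired", false);
        List<String> snapshotProcs = zk.getChildren("/hbase/online-snapshot/acquired", false);
        System.out.println("flush-table procedures: " + flushProcs);
        System.out.println("online-snapshot procedures: " + snapshotProcs);
        zk.close();
      }
    }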
2023-07-12 08:18:46,356 INFO [RS:0;jenkins-hbase4:46091] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46091%2C1689149925750, suffix=, logDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,46091,1689149925750, archiveDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/oldWALs, maxLogs=32 2023-07-12 08:18:46,356 INFO [RS:1;jenkins-hbase4:33385] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33385%2C1689149925793, suffix=, logDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,33385,1689149925793, archiveDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/oldWALs, maxLogs=32 2023-07-12 08:18:46,356 INFO [RS:2;jenkins-hbase4:39181] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39181%2C1689149925844, suffix=, logDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,39181,1689149925844, archiveDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/oldWALs, maxLogs=32 2023-07-12 08:18:46,382 DEBUG [jenkins-hbase4:46573] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 08:18:46,382 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41229,DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9,DISK] 2023-07-12 08:18:46,382 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35229,DS-01a317f0-d8f0-4962-9a54-f71df947c3b2,DISK] 2023-07-12 08:18:46,382 DEBUG [jenkins-hbase4:46573] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:46,382 DEBUG [jenkins-hbase4:46573] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:46,382 DEBUG [jenkins-hbase4:46573] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:46,382 DEBUG [jenkins-hbase4:46573] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:46,382 DEBUG [jenkins-hbase4:46573] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:46,384 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46091,1689149925750, state=OPENING 2023-07-12 08:18:46,385 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45421,DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00,DISK] 2023-07-12 08:18:46,387 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 08:18:46,389 INFO [RS:0;jenkins-hbase4:46091] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,46091,1689149925750/jenkins-hbase4.apache.org%2C46091%2C1689149925750.1689149926359 2023-07-12 08:18:46,389 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:46,390 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46091,1689149925750}] 2023-07-12 08:18:46,390 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 08:18:46,391 DEBUG [RS:0;jenkins-hbase4:46091] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41229,DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9,DISK], DatanodeInfoWithStorage[127.0.0.1:35229,DS-01a317f0-d8f0-4962-9a54-f71df947c3b2,DISK], DatanodeInfoWithStorage[127.0.0.1:45421,DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00,DISK]] 2023-07-12 08:18:46,395 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35229,DS-01a317f0-d8f0-4962-9a54-f71df947c3b2,DISK] 2023-07-12 08:18:46,396 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45421,DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00,DISK] 2023-07-12 08:18:46,395 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41229,DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9,DISK] 2023-07-12 08:18:46,397 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45421,DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00,DISK] 2023-07-12 08:18:46,397 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35229,DS-01a317f0-d8f0-4962-9a54-f71df947c3b2,DISK] 2023-07-12 08:18:46,397 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41229,DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9,DISK] 2023-07-12 08:18:46,402 WARN [ReadOnlyZKClient-127.0.0.1:63658@0x7e9ee6b5] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 08:18:46,403 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46573,1689149925661] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:46,407 INFO [RS:1;jenkins-hbase4:33385] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,33385,1689149925793/jenkins-hbase4.apache.org%2C33385%2C1689149925793.1689149926366 2023-07-12 08:18:46,407 INFO [RS:2;jenkins-hbase4:39181] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,39181,1689149925844/jenkins-hbase4.apache.org%2C39181%2C1689149925844.1689149926366 2023-07-12 08:18:46,407 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37130, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:46,408 DEBUG [RS:1;jenkins-hbase4:33385] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45421,DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00,DISK], DatanodeInfoWithStorage[127.0.0.1:35229,DS-01a317f0-d8f0-4962-9a54-f71df947c3b2,DISK], DatanodeInfoWithStorage[127.0.0.1:41229,DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9,DISK]] 2023-07-12 08:18:46,408 DEBUG [RS:2;jenkins-hbase4:39181] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45421,DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00,DISK], DatanodeInfoWithStorage[127.0.0.1:41229,DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9,DISK], DatanodeInfoWithStorage[127.0.0.1:35229,DS-01a317f0-d8f0-4962-9a54-f71df947c3b2,DISK]] 2023-07-12 08:18:46,408 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46091] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:37130 deadline: 1689149986408, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:46,550 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:46,552 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:46,555 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37132, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:46,560 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 08:18:46,560 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:46,561 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46091%2C1689149925750.meta, suffix=.meta, logDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,46091,1689149925750, archiveDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/oldWALs, maxLogs=32 2023-07-12 08:18:46,564 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-12 08:18:46,585 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:45421,DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00,DISK] 2023-07-12 08:18:46,586 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35229,DS-01a317f0-d8f0-4962-9a54-f71df947c3b2,DISK] 2023-07-12 08:18:46,585 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41229,DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9,DISK] 2023-07-12 08:18:46,595 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/WALs/jenkins-hbase4.apache.org,46091,1689149925750/jenkins-hbase4.apache.org%2C46091%2C1689149925750.meta.1689149926562.meta 2023-07-12 08:18:46,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45421,DS-e38dba0c-dcb5-40b9-942c-5d054bc94e00,DISK], DatanodeInfoWithStorage[127.0.0.1:41229,DS-16bd8f60-43e2-4eec-9e65-b3d0d96928e9,DISK], DatanodeInfoWithStorage[127.0.0.1:35229,DS-01a317f0-d8f0-4962-9a54-f71df947c3b2,DISK]] 2023-07-12 08:18:46,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:46,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 08:18:46,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 08:18:46,599 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 08:18:46,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 08:18:46,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:46,600 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 08:18:46,600 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 08:18:46,601 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 08:18:46,603 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/info 2023-07-12 08:18:46,604 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/info 2023-07-12 08:18:46,604 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 08:18:46,605 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:46,605 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 08:18:46,606 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/rep_barrier 2023-07-12 08:18:46,606 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/rep_barrier 2023-07-12 08:18:46,607 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 08:18:46,608 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:46,608 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 08:18:46,609 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/table 2023-07-12 08:18:46,609 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/table 2023-07-12 08:18:46,611 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 08:18:46,611 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:46,612 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740 2023-07-12 08:18:46,613 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740 2023-07-12 08:18:46,616 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
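[editor's note] The CompactionConfiguration lines above (minCompactSize 128 MB, 3-10 files per compaction, ratio 1.2, off-peak ratio 5.0) are driven by a handful of hbase-site.xml keys. A small sketch reading them, with the logged values reused as fall-back defaults, which is an assumption since the test's own configuration is not shown:

    // Illustration only; defaults below simply echo the values printed in the log.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    Configuration conf = HBaseConfiguration.create();
    long minCompactSize = conf.getLong("hbase.hstore.compaction.min.size", 134_217_728L); // 128 MB
    int minFiles = conf.getInt("hbase.hstore.compaction.min", 3);
    int maxFiles = conf.getInt("hbase.hstore.compaction.max", 10);
    float ratio = conf.getFloat("hbase.hstore.compaction.ratio", 1.2f);
    float offPeakRatio = conf.getFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);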
2023-07-12 08:18:46,618 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 08:18:46,619 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11142144640, jitterRate=0.0376930832862854}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 08:18:46,619 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 08:18:46,619 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689149926550 2023-07-12 08:18:46,626 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 08:18:46,626 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 08:18:46,627 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46091,1689149925750, state=OPEN 2023-07-12 08:18:46,628 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 08:18:46,628 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 08:18:46,630 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 08:18:46,630 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46091,1689149925750 in 238 msec 2023-07-12 08:18:46,631 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 08:18:46,631 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 401 msec 2023-07-12 08:18:46,633 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 536 msec 2023-07-12 08:18:46,633 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689149926633, completionTime=-1 2023-07-12 08:18:46,633 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 08:18:46,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
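[editor's note] Once PEWorker-5 marks hbase:meta OPEN above, its location is visible through the normal client API. A minimal, hypothetical client sketch (not part of the test) that asks where meta landed; in this run the answer would be jenkins-hbase4.apache.org,46091,1689149925750:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }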
2023-07-12 08:18:46,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 08:18:46,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689149986638 2023-07-12 08:18:46,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689150046638 2023-07-12 08:18:46,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-12 08:18:46,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46573,1689149925661-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46573,1689149925661-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46573,1689149925661-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46573, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:46,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 08:18:46,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:46,646 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 08:18:46,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 08:18:46,648 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:46,649 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:46,651 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:46,651 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703 empty. 2023-07-12 08:18:46,652 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:46,652 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 08:18:46,666 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:46,668 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => cafa241b8342fa7f378b4b53a44ba703, NAME => 'hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp 2023-07-12 08:18:46,691 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:46,691 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing cafa241b8342fa7f378b4b53a44ba703, disabling compactions & flushes 2023-07-12 08:18:46,692 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 
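[editor's note] The hbase:namespace table being created above is the system table that backs namespace metadata; once it is online, namespaces are managed through the Admin API. A small hypothetical example (the namespace name below is made up for illustration):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Each namespace ends up as a row in the hbase:namespace table created above.
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());
          }
        }
      }
    }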
2023-07-12 08:18:46,692 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 2023-07-12 08:18:46,692 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. after waiting 0 ms 2023-07-12 08:18:46,692 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 2023-07-12 08:18:46,692 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 2023-07-12 08:18:46,692 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for cafa241b8342fa7f378b4b53a44ba703: 2023-07-12 08:18:46,694 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:46,695 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149926695"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149926695"}]},"ts":"1689149926695"} 2023-07-12 08:18:46,697 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:46,698 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:46,698 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149926698"}]},"ts":"1689149926698"} 2023-07-12 08:18:46,699 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 08:18:46,702 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:46,702 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:46,702 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:46,702 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:46,702 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:46,702 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cafa241b8342fa7f378b4b53a44ba703, ASSIGN}] 2023-07-12 08:18:46,704 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cafa241b8342fa7f378b4b53a44ba703, ASSIGN 2023-07-12 08:18:46,705 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=cafa241b8342fa7f378b4b53a44ba703, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39181,1689149925844; forceNewPlan=false, retain=false 2023-07-12 08:18:46,712 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46573,1689149925661] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:46,713 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46573,1689149925661] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 08:18:46,715 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:46,715 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:46,717 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b 2023-07-12 08:18:46,717 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b empty. 
2023-07-12 08:18:46,718 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b 2023-07-12 08:18:46,718 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 08:18:46,728 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:46,729 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => c1941b6a001319dad431d041648c042b, NAME => 'hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp 2023-07-12 08:18:46,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:46,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing c1941b6a001319dad431d041648c042b, disabling compactions & flushes 2023-07-12 08:18:46,738 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 2023-07-12 08:18:46,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 2023-07-12 08:18:46,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. after waiting 0 ms 2023-07-12 08:18:46,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 2023-07-12 08:18:46,738 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 
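[editor's note] The hbase:rsgroup descriptor printed above (MultiRowMutationEndpoint coprocessor, DisabledRegionSplitPolicy, a single 'm' family with one version and 64 KB blocks) can be reproduced with the public client builders. This is a sketch of an equivalent descriptor only; the real table is created internally by RSGroupInfoManagerImpl, not by user code:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // (inside a method that may throw IOException)
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("hbase:rsgroup"))
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .setRegionSplitPolicyClassName("org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
            .setBloomFilterType(BloomType.ROW)
            .setMaxVersions(1)
            .setBlocksize(65536)
            .build())
        .build();
    // admin.createTable(desc) would create an equivalent user table for comparison.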
2023-07-12 08:18:46,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for c1941b6a001319dad431d041648c042b: 2023-07-12 08:18:46,740 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:46,741 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149926741"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149926741"}]},"ts":"1689149926741"} 2023-07-12 08:18:46,742 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:46,743 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:46,743 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149926743"}]},"ts":"1689149926743"} 2023-07-12 08:18:46,744 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 08:18:46,748 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:46,748 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:46,748 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:46,748 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:46,748 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:46,748 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c1941b6a001319dad431d041648c042b, ASSIGN}] 2023-07-12 08:18:46,749 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c1941b6a001319dad431d041648c042b, ASSIGN 2023-07-12 08:18:46,750 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=c1941b6a001319dad431d041648c042b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33385,1689149925793; forceNewPlan=false, retain=false 2023-07-12 08:18:46,750 INFO [jenkins-hbase4:46573] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 08:18:46,752 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cafa241b8342fa7f378b4b53a44ba703, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,752 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149926752"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149926752"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149926752"}]},"ts":"1689149926752"} 2023-07-12 08:18:46,752 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=c1941b6a001319dad431d041648c042b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,752 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149926752"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149926752"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149926752"}]},"ts":"1689149926752"} 2023-07-12 08:18:46,753 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure cafa241b8342fa7f378b4b53a44ba703, server=jenkins-hbase4.apache.org,39181,1689149925844}] 2023-07-12 08:18:46,753 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure c1941b6a001319dad431d041648c042b, server=jenkins-hbase4.apache.org,33385,1689149925793}] 2023-07-12 08:18:46,906 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,906 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,906 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:46,907 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:46,909 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54276, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:46,910 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44754, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:46,916 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 2023-07-12 08:18:46,916 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 
2023-07-12 08:18:46,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cafa241b8342fa7f378b4b53a44ba703, NAME => 'hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:46,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c1941b6a001319dad431d041648c042b, NAME => 'hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:46,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:46,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:46,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 08:18:46,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:46,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:46,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. service=MultiRowMutationService 2023-07-12 08:18:46,916 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 08:18:46,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup c1941b6a001319dad431d041648c042b 2023-07-12 08:18:46,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:46,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c1941b6a001319dad431d041648c042b 2023-07-12 08:18:46,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c1941b6a001319dad431d041648c042b 2023-07-12 08:18:46,918 INFO [StoreOpener-cafa241b8342fa7f378b4b53a44ba703-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:46,918 INFO [StoreOpener-c1941b6a001319dad431d041648c042b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region c1941b6a001319dad431d041648c042b 2023-07-12 08:18:46,919 DEBUG [StoreOpener-cafa241b8342fa7f378b4b53a44ba703-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703/info 2023-07-12 08:18:46,919 DEBUG [StoreOpener-cafa241b8342fa7f378b4b53a44ba703-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703/info 2023-07-12 08:18:46,919 INFO [StoreOpener-cafa241b8342fa7f378b4b53a44ba703-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cafa241b8342fa7f378b4b53a44ba703 columnFamilyName info 2023-07-12 08:18:46,919 DEBUG [StoreOpener-c1941b6a001319dad431d041648c042b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b/m 2023-07-12 08:18:46,919 DEBUG [StoreOpener-c1941b6a001319dad431d041648c042b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b/m 2023-07-12 08:18:46,920 INFO 
[StoreOpener-cafa241b8342fa7f378b4b53a44ba703-1] regionserver.HStore(310): Store=cafa241b8342fa7f378b4b53a44ba703/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:46,920 INFO [StoreOpener-c1941b6a001319dad431d041648c042b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c1941b6a001319dad431d041648c042b columnFamilyName m 2023-07-12 08:18:46,920 INFO [StoreOpener-c1941b6a001319dad431d041648c042b-1] regionserver.HStore(310): Store=c1941b6a001319dad431d041648c042b/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:46,921 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:46,921 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:46,921 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b 2023-07-12 08:18:46,922 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b 2023-07-12 08:18:46,924 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:46,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c1941b6a001319dad431d041648c042b 2023-07-12 08:18:46,927 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:46,928 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cafa241b8342fa7f378b4b53a44ba703; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11931080000, jitterRate=0.11116841435432434}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 
2023-07-12 08:18:46,928 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cafa241b8342fa7f378b4b53a44ba703: 2023-07-12 08:18:46,929 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:46,929 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c1941b6a001319dad431d041648c042b; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@10c5a88d, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:46,929 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c1941b6a001319dad431d041648c042b: 2023-07-12 08:18:46,929 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703., pid=8, masterSystemTime=1689149926906 2023-07-12 08:18:46,932 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b., pid=9, masterSystemTime=1689149926906 2023-07-12 08:18:46,934 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 2023-07-12 08:18:46,935 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 2023-07-12 08:18:46,935 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cafa241b8342fa7f378b4b53a44ba703, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:46,935 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149926935"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149926935"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149926935"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149926935"}]},"ts":"1689149926935"} 2023-07-12 08:18:46,936 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 2023-07-12 08:18:46,936 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 
2023-07-12 08:18:46,936 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=c1941b6a001319dad431d041648c042b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:46,937 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149926936"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149926936"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149926936"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149926936"}]},"ts":"1689149926936"} 2023-07-12 08:18:46,939 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-12 08:18:46,939 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure cafa241b8342fa7f378b4b53a44ba703, server=jenkins-hbase4.apache.org,39181,1689149925844 in 184 msec 2023-07-12 08:18:46,940 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-12 08:18:46,940 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=cafa241b8342fa7f378b4b53a44ba703, ASSIGN in 237 msec 2023-07-12 08:18:46,941 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:46,941 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149926941"}]},"ts":"1689149926941"} 2023-07-12 08:18:46,944 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 08:18:46,944 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 08:18:46,944 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure c1941b6a001319dad431d041648c042b, server=jenkins-hbase4.apache.org,33385,1689149925793 in 190 msec 2023-07-12 08:18:46,946 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:46,946 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-12 08:18:46,946 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=c1941b6a001319dad431d041648c042b, ASSIGN in 196 msec 2023-07-12 08:18:46,946 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:46,947 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149926947"}]},"ts":"1689149926947"} 2023-07-12 08:18:46,947 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 08:18:46,947 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 301 msec 2023-07-12 08:18:46,948 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 08:18:46,949 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:46,949 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:46,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:46,954 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:46,954 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54284, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:46,957 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 242 msec 2023-07-12 08:18:46,958 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 08:18:46,964 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:46,967 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-12 08:18:46,969 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 08:18:46,976 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:46,979 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-12 08:18:46,983 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 08:18:46,991 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, 
state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 08:18:46,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.048sec 2023-07-12 08:18:46,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-12 08:18:46,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:46,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-12 08:18:46,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-12 08:18:46,993 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:46,994 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:46,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-12 08:18:46,996 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:46,996 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 empty. 2023-07-12 08:18:46,997 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:46,997 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-12 08:18:47,000 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-12 08:18:47,000 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-12 08:18:47,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:47,003 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:47,003 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 08:18:47,003 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 08:18:47,003 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46573,1689149925661-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 08:18:47,003 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46573,1689149925661-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-12 08:18:47,004 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 08:18:47,012 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:47,014 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, NAME => 'hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp 2023-07-12 08:18:47,016 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46573,1689149925661] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:47,018 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44764, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:47,019 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 08:18:47,019 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-12 08:18:47,023 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:47,023 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, disabling compactions & flushes 2023-07-12 08:18:47,023 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:47,023 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:47,023 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 2023-07-12 08:18:47,023 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 2023-07-12 08:18:47,023 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. after waiting 0 ms 2023-07-12 08:18:47,024 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 2023-07-12 08:18:47,024 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 2023-07-12 08:18:47,024 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563: 2023-07-12 08:18:47,025 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 08:18:47,026 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:47,026 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46573,1689149925661] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 08:18:47,026 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689149927026"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149927026"}]},"ts":"1689149927026"} 2023-07-12 08:18:47,028 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 08:18:47,028 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:47,028 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149927028"}]},"ts":"1689149927028"} 2023-07-12 08:18:47,029 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-12 08:18:47,033 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:47,033 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:47,033 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:47,033 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:47,033 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:47,033 DEBUG [Listener at localhost/36551] zookeeper.ReadOnlyZKClient(139): Connect 0x4411c3c2 to 127.0.0.1:63658 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:47,033 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, ASSIGN}] 2023-07-12 08:18:47,036 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, ASSIGN 2023-07-12 08:18:47,037 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33385,1689149925793; forceNewPlan=false, retain=false 2023-07-12 08:18:47,039 DEBUG [Listener at localhost/36551] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f77d759, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:47,040 DEBUG [hconnection-0x2492ed2c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:47,042 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37142, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:47,043 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,46573,1689149925661 2023-07-12 08:18:47,043 INFO [Listener at localhost/36551] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:47,045 DEBUG [Listener at localhost/36551] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 08:18:47,047 INFO [RS-EventLoopGroup-8-2] 
ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46252, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 08:18:47,051 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 08:18:47,052 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:47,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-12 08:18:47,053 DEBUG [Listener at localhost/36551] zookeeper.ReadOnlyZKClient(139): Connect 0x09f33695 to 127.0.0.1:63658 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:47,058 DEBUG [Listener at localhost/36551] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3abb410a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:47,058 INFO [Listener at localhost/36551] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:63658 2023-07-12 08:18:47,060 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:47,061 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101589cec02000a connected 2023-07-12 08:18:47,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-12 08:18:47,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-12 08:18:47,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-12 08:18:47,079 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:47,082 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 16 msec 2023-07-12 08:18:47,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-12 08:18:47,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', 
BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:47,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-12 08:18:47,180 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:47,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-12 08:18:47,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 08:18:47,182 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:47,182 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 08:18:47,184 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:47,186 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/np1/table1/95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:47,186 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/np1/table1/95d09bb9fb56b2144c39356db424131c empty. 2023-07-12 08:18:47,187 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/np1/table1/95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:47,187 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 08:18:47,187 INFO [jenkins-hbase4:46573] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 08:18:47,188 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:47,188 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689149927188"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149927188"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149927188"}]},"ts":"1689149927188"} 2023-07-12 08:18:47,189 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, server=jenkins-hbase4.apache.org,33385,1689149925793}] 2023-07-12 08:18:47,201 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:47,202 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 95d09bb9fb56b2144c39356db424131c, NAME => 'np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp 2023-07-12 08:18:47,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:47,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 95d09bb9fb56b2144c39356db424131c, disabling compactions & flushes 2023-07-12 08:18:47,215 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 2023-07-12 08:18:47,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 2023-07-12 08:18:47,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. after waiting 0 ms 2023-07-12 08:18:47,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 2023-07-12 08:18:47,216 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 
2023-07-12 08:18:47,216 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 95d09bb9fb56b2144c39356db424131c: 2023-07-12 08:18:47,219 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:47,220 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689149927220"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149927220"}]},"ts":"1689149927220"} 2023-07-12 08:18:47,221 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:47,222 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:47,222 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149927222"}]},"ts":"1689149927222"} 2023-07-12 08:18:47,223 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-12 08:18:47,228 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:47,228 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:47,228 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:47,228 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:47,228 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:47,229 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=95d09bb9fb56b2144c39356db424131c, ASSIGN}] 2023-07-12 08:18:47,229 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=95d09bb9fb56b2144c39356db424131c, ASSIGN 2023-07-12 08:18:47,230 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=95d09bb9fb56b2144c39356db424131c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33385,1689149925793; forceNewPlan=false, retain=false 2023-07-12 08:18:47,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 08:18:47,344 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 
2023-07-12 08:18:47,344 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, NAME => 'hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:47,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:47,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:47,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:47,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:47,346 INFO [StoreOpener-2b5fbdc3f7cb8bf3c8f16ff8b8cdc563-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:47,348 DEBUG [StoreOpener-2b5fbdc3f7cb8bf3c8f16ff8b8cdc563-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563/q 2023-07-12 08:18:47,348 DEBUG [StoreOpener-2b5fbdc3f7cb8bf3c8f16ff8b8cdc563-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563/q 2023-07-12 08:18:47,348 INFO [StoreOpener-2b5fbdc3f7cb8bf3c8f16ff8b8cdc563-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 columnFamilyName q 2023-07-12 08:18:47,349 INFO [StoreOpener-2b5fbdc3f7cb8bf3c8f16ff8b8cdc563-1] regionserver.HStore(310): Store=2b5fbdc3f7cb8bf3c8f16ff8b8cdc563/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:47,349 INFO [StoreOpener-2b5fbdc3f7cb8bf3c8f16ff8b8cdc563-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:47,350 DEBUG 
[StoreOpener-2b5fbdc3f7cb8bf3c8f16ff8b8cdc563-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563/u 2023-07-12 08:18:47,350 DEBUG [StoreOpener-2b5fbdc3f7cb8bf3c8f16ff8b8cdc563-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563/u 2023-07-12 08:18:47,350 INFO [StoreOpener-2b5fbdc3f7cb8bf3c8f16ff8b8cdc563-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 columnFamilyName u 2023-07-12 08:18:47,351 INFO [StoreOpener-2b5fbdc3f7cb8bf3c8f16ff8b8cdc563-1] regionserver.HStore(310): Store=2b5fbdc3f7cb8bf3c8f16ff8b8cdc563/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:47,352 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:47,352 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:47,354 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-12 08:18:47,356 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:47,358 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:47,359 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11478950240, jitterRate=0.0690605491399765}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-12 08:18:47,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563: 2023-07-12 08:18:47,360 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563., pid=16, masterSystemTime=1689149927341 2023-07-12 08:18:47,361 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 2023-07-12 08:18:47,361 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 2023-07-12 08:18:47,361 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:47,361 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689149927361"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149927361"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149927361"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149927361"}]},"ts":"1689149927361"} 2023-07-12 08:18:47,364 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-12 08:18:47,364 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, server=jenkins-hbase4.apache.org,33385,1689149925793 in 174 msec 2023-07-12 08:18:47,366 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 08:18:47,366 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, ASSIGN in 331 msec 2023-07-12 08:18:47,367 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:47,367 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149927367"}]},"ts":"1689149927367"} 2023-07-12 08:18:47,368 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-12 08:18:47,370 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:47,371 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 379 msec 2023-07-12 08:18:47,380 INFO [jenkins-hbase4:46573] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 08:18:47,382 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=95d09bb9fb56b2144c39356db424131c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:47,382 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689149927381"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149927381"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149927381"}]},"ts":"1689149927381"} 2023-07-12 08:18:47,383 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 95d09bb9fb56b2144c39356db424131c, server=jenkins-hbase4.apache.org,33385,1689149925793}] 2023-07-12 08:18:47,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 08:18:47,538 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 
2023-07-12 08:18:47,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 95d09bb9fb56b2144c39356db424131c, NAME => 'np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:47,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:47,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:47,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:47,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:47,539 INFO [StoreOpener-95d09bb9fb56b2144c39356db424131c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:47,541 DEBUG [StoreOpener-95d09bb9fb56b2144c39356db424131c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/np1/table1/95d09bb9fb56b2144c39356db424131c/fam1 2023-07-12 08:18:47,541 DEBUG [StoreOpener-95d09bb9fb56b2144c39356db424131c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/np1/table1/95d09bb9fb56b2144c39356db424131c/fam1 2023-07-12 08:18:47,541 INFO [StoreOpener-95d09bb9fb56b2144c39356db424131c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 95d09bb9fb56b2144c39356db424131c columnFamilyName fam1 2023-07-12 08:18:47,542 INFO [StoreOpener-95d09bb9fb56b2144c39356db424131c-1] regionserver.HStore(310): Store=95d09bb9fb56b2144c39356db424131c/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:47,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/np1/table1/95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:47,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/np1/table1/95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:47,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:47,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/np1/table1/95d09bb9fb56b2144c39356db424131c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:47,547 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 95d09bb9fb56b2144c39356db424131c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11576096640, jitterRate=0.07810801267623901}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:47,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 95d09bb9fb56b2144c39356db424131c: 2023-07-12 08:18:47,548 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c., pid=18, masterSystemTime=1689149927534 2023-07-12 08:18:47,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 2023-07-12 08:18:47,549 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 2023-07-12 08:18:47,550 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=95d09bb9fb56b2144c39356db424131c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:47,550 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689149927550"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149927550"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149927550"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149927550"}]},"ts":"1689149927550"} 2023-07-12 08:18:47,553 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-12 08:18:47,553 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 95d09bb9fb56b2144c39356db424131c, server=jenkins-hbase4.apache.org,33385,1689149925793 in 168 msec 2023-07-12 08:18:47,554 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-12 08:18:47,554 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=95d09bb9fb56b2144c39356db424131c, ASSIGN in 324 msec 2023-07-12 08:18:47,555 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:47,555 DEBUG [PEWorker-5] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149927555"}]},"ts":"1689149927555"} 2023-07-12 08:18:47,556 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-12 08:18:47,558 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:47,559 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 381 msec 2023-07-12 08:18:47,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 08:18:47,785 INFO [Listener at localhost/36551] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-12 08:18:47,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:47,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-12 08:18:47,788 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:47,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-12 08:18:47,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 08:18:47,808 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=21 msec 2023-07-12 08:18:47,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 08:18:47,893 INFO [Listener at localhost/36551] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
2023-07-12 08:18:47,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:47,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:47,895 INFO [Listener at localhost/36551] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-12 08:18:47,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-12 08:18:47,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-12 08:18:47,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 08:18:47,903 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149927903"}]},"ts":"1689149927903"} 2023-07-12 08:18:47,904 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-12 08:18:47,906 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-12 08:18:47,907 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=95d09bb9fb56b2144c39356db424131c, UNASSIGN}] 2023-07-12 08:18:47,908 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=95d09bb9fb56b2144c39356db424131c, UNASSIGN 2023-07-12 08:18:47,908 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=95d09bb9fb56b2144c39356db424131c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:47,909 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689149927908"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149927908"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149927908"}]},"ts":"1689149927908"} 2023-07-12 08:18:47,910 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 95d09bb9fb56b2144c39356db424131c, server=jenkins-hbase4.apache.org,33385,1689149925793}] 2023-07-12 08:18:48,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 08:18:48,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:48,064 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 95d09bb9fb56b2144c39356db424131c, disabling compactions & flushes 2023-07-12 08:18:48,064 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 2023-07-12 08:18:48,064 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 2023-07-12 08:18:48,064 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. after waiting 0 ms 2023-07-12 08:18:48,064 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 2023-07-12 08:18:48,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/np1/table1/95d09bb9fb56b2144c39356db424131c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:48,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c. 2023-07-12 08:18:48,069 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 95d09bb9fb56b2144c39356db424131c: 2023-07-12 08:18:48,071 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:48,071 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=95d09bb9fb56b2144c39356db424131c, regionState=CLOSED 2023-07-12 08:18:48,071 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689149928071"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149928071"}]},"ts":"1689149928071"} 2023-07-12 08:18:48,075 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-12 08:18:48,075 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 95d09bb9fb56b2144c39356db424131c, server=jenkins-hbase4.apache.org,33385,1689149925793 in 162 msec 2023-07-12 08:18:48,077 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-12 08:18:48,077 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=95d09bb9fb56b2144c39356db424131c, UNASSIGN in 168 msec 2023-07-12 08:18:48,078 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149928078"}]},"ts":"1689149928078"} 2023-07-12 08:18:48,079 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-12 08:18:48,084 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-12 08:18:48,085 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 188 msec 2023-07-12 08:18:48,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 08:18:48,202 INFO [Listener at localhost/36551] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-12 08:18:48,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-12 08:18:48,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-12 08:18:48,205 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 08:18:48,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-12 08:18:48,206 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 08:18:48,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:48,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 08:18:48,210 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/np1/table1/95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:48,212 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/np1/table1/95d09bb9fb56b2144c39356db424131c/fam1, FileablePath, hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/np1/table1/95d09bb9fb56b2144c39356db424131c/recovered.edits] 2023-07-12 08:18:48,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 08:18:48,218 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/np1/table1/95d09bb9fb56b2144c39356db424131c/recovered.edits/4.seqid to hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/archive/data/np1/table1/95d09bb9fb56b2144c39356db424131c/recovered.edits/4.seqid 2023-07-12 08:18:48,219 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/.tmp/data/np1/table1/95d09bb9fb56b2144c39356db424131c 2023-07-12 08:18:48,219 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 08:18:48,221 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 08:18:48,223 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-12 08:18:48,225 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-12 08:18:48,226 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 08:18:48,226 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-12 08:18:48,226 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149928226"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:48,227 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 08:18:48,227 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 95d09bb9fb56b2144c39356db424131c, NAME => 'np1:table1,,1689149927176.95d09bb9fb56b2144c39356db424131c.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 08:18:48,227 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-12 08:18:48,227 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689149928227"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:48,228 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-12 08:18:48,232 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 08:18:48,233 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 30 msec 2023-07-12 08:18:48,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 08:18:48,314 INFO [Listener at localhost/36551] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-12 08:18:48,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-12 08:18:48,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-12 08:18:48,333 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 08:18:48,360 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 08:18:48,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 08:18:48,366 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 08:18:48,369 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-12 08:18:48,369 DEBUG [Listener at 
localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:48,370 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 08:18:48,372 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 08:18:48,375 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 51 msec 2023-07-12 08:18:48,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46573] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 08:18:48,465 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 08:18:48,465 INFO [Listener at localhost/36551] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 08:18:48,465 DEBUG [Listener at localhost/36551] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4411c3c2 to 127.0.0.1:63658 2023-07-12 08:18:48,465 DEBUG [Listener at localhost/36551] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:48,465 DEBUG [Listener at localhost/36551] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 08:18:48,465 DEBUG [Listener at localhost/36551] util.JVMClusterUtil(257): Found active master hash=650457962, stopped=false 2023-07-12 08:18:48,466 DEBUG [Listener at localhost/36551] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 08:18:48,466 DEBUG [Listener at localhost/36551] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 08:18:48,466 DEBUG [Listener at localhost/36551] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-12 08:18:48,466 INFO [Listener at localhost/36551] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46573,1689149925661 2023-07-12 08:18:48,469 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:48,469 INFO [Listener at localhost/36551] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 08:18:48,469 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:48,469 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:48,470 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:48,470 DEBUG 
[Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:48,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:48,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:48,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:48,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:48,471 DEBUG [Listener at localhost/36551] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7e9ee6b5 to 127.0.0.1:63658 2023-07-12 08:18:48,471 DEBUG [Listener at localhost/36551] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:48,472 INFO [Listener at localhost/36551] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46091,1689149925750' ***** 2023-07-12 08:18:48,472 INFO [Listener at localhost/36551] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:48,472 INFO [Listener at localhost/36551] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33385,1689149925793' ***** 2023-07-12 08:18:48,472 INFO [Listener at localhost/36551] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:48,472 INFO [Listener at localhost/36551] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39181,1689149925844' ***** 2023-07-12 08:18:48,472 INFO [Listener at localhost/36551] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:48,472 INFO [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:48,472 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:48,472 INFO [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:48,483 INFO [RS:1;jenkins-hbase4:33385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2b5e8561{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:48,483 INFO [RS:0;jenkins-hbase4:46091] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3f6896e5{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:48,483 INFO [RS:2;jenkins-hbase4:39181] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1cbd4662{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:48,484 INFO [RS:1;jenkins-hbase4:33385] server.AbstractConnector(383): 
Stopped ServerConnector@7aa1b707{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:48,484 INFO [RS:2;jenkins-hbase4:39181] server.AbstractConnector(383): Stopped ServerConnector@2be675c5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:48,484 INFO [RS:1;jenkins-hbase4:33385] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:48,484 INFO [RS:0;jenkins-hbase4:46091] server.AbstractConnector(383): Stopped ServerConnector@37060062{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:48,485 INFO [RS:0;jenkins-hbase4:46091] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:48,484 INFO [RS:2;jenkins-hbase4:39181] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:48,488 INFO [RS:0;jenkins-hbase4:46091] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@150457ec{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:48,485 INFO [RS:1;jenkins-hbase4:33385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6771d4ba{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:48,488 INFO [RS:0;jenkins-hbase4:46091] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e0a0d01{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:48,488 INFO [RS:1;jenkins-hbase4:33385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6de35580{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:48,488 INFO [RS:2;jenkins-hbase4:39181] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1d5d1fe6{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:48,489 INFO [RS:2;jenkins-hbase4:39181] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4b95847b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:48,489 INFO [RS:1;jenkins-hbase4:33385] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 08:18:48,489 INFO [RS:1;jenkins-hbase4:33385] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 08:18:48,489 INFO [RS:1;jenkins-hbase4:33385] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 08:18:48,489 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(3305): Received CLOSE for c1941b6a001319dad431d041648c042b 2023-07-12 08:18:48,489 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:48,489 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(3305): Received CLOSE for 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563 2023-07-12 08:18:48,490 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:48,490 DEBUG [RS:1;jenkins-hbase4:33385] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6cf9f1fd to 127.0.0.1:63658 2023-07-12 08:18:48,490 INFO [RS:2;jenkins-hbase4:39181] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 08:18:48,490 DEBUG [RS:1;jenkins-hbase4:33385] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:48,490 INFO [RS:2;jenkins-hbase4:39181] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 08:18:48,493 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:48,493 INFO [RS:2;jenkins-hbase4:39181] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 08:18:48,493 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-12 08:18:48,493 INFO [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(3305): Received CLOSE for cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:48,493 DEBUG [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1478): Online Regions={c1941b6a001319dad431d041648c042b=hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b., 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563=hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563.} 2023-07-12 08:18:48,494 DEBUG [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1504): Waiting on 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, c1941b6a001319dad431d041648c042b 2023-07-12 08:18:48,494 INFO [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:48,494 DEBUG [RS:2;jenkins-hbase4:39181] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x40ec0d9e to 127.0.0.1:63658 2023-07-12 08:18:48,494 DEBUG [RS:2;jenkins-hbase4:39181] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:48,497 INFO [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 08:18:48,497 DEBUG [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1478): Online Regions={cafa241b8342fa7f378b4b53a44ba703=hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703.} 2023-07-12 08:18:48,497 DEBUG [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1504): Waiting on cafa241b8342fa7f378b4b53a44ba703 2023-07-12 08:18:48,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c1941b6a001319dad431d041648c042b, disabling compactions & flushes 2023-07-12 08:18:48,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 2023-07-12 08:18:48,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 
2023-07-12 08:18:48,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. after waiting 0 ms 2023-07-12 08:18:48,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 2023-07-12 08:18:48,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c1941b6a001319dad431d041648c042b 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-12 08:18:48,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cafa241b8342fa7f378b4b53a44ba703, disabling compactions & flushes 2023-07-12 08:18:48,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 2023-07-12 08:18:48,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 2023-07-12 08:18:48,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. after waiting 0 ms 2023-07-12 08:18:48,498 INFO [RS:0;jenkins-hbase4:46091] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 08:18:48,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 2023-07-12 08:18:48,498 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:48,498 INFO [RS:0;jenkins-hbase4:46091] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 08:18:48,499 INFO [RS:0;jenkins-hbase4:46091] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 08:18:48,499 INFO [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:48,499 DEBUG [RS:0;jenkins-hbase4:46091] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x62fbe037 to 127.0.0.1:63658 2023-07-12 08:18:48,499 DEBUG [RS:0;jenkins-hbase4:46091] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:48,499 INFO [RS:0;jenkins-hbase4:46091] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 08:18:48,499 INFO [RS:0;jenkins-hbase4:46091] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:48,499 INFO [RS:0;jenkins-hbase4:46091] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 08:18:48,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing cafa241b8342fa7f378b4b53a44ba703 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-12 08:18:48,499 INFO [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 08:18:48,501 INFO [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 08:18:48,502 DEBUG [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-12 08:18:48,502 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 08:18:48,502 DEBUG [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 08:18:48,502 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 08:18:48,502 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 08:18:48,502 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 08:18:48,502 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 08:18:48,502 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-12 08:18:48,522 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:48,524 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:48,525 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:48,551 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b/.tmp/m/a549093ed73c416f947d4ae9fbbd9abd 2023-07-12 08:18:48,551 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703/.tmp/info/415407be4ee84eb080b6e2a0d85a854b 2023-07-12 08:18:48,556 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/.tmp/info/46e7e01f13dd495590babc8a05c6df79 2023-07-12 08:18:48,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b/.tmp/m/a549093ed73c416f947d4ae9fbbd9abd as 
hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b/m/a549093ed73c416f947d4ae9fbbd9abd 2023-07-12 08:18:48,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 415407be4ee84eb080b6e2a0d85a854b 2023-07-12 08:18:48,567 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 46e7e01f13dd495590babc8a05c6df79 2023-07-12 08:18:48,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703/.tmp/info/415407be4ee84eb080b6e2a0d85a854b as hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703/info/415407be4ee84eb080b6e2a0d85a854b 2023-07-12 08:18:48,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b/m/a549093ed73c416f947d4ae9fbbd9abd, entries=1, sequenceid=7, filesize=4.9 K 2023-07-12 08:18:48,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 415407be4ee84eb080b6e2a0d85a854b 2023-07-12 08:18:48,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703/info/415407be4ee84eb080b6e2a0d85a854b, entries=3, sequenceid=8, filesize=5.0 K 2023-07-12 08:18:48,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for c1941b6a001319dad431d041648c042b in 88ms, sequenceid=7, compaction requested=false 2023-07-12 08:18:48,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for cafa241b8342fa7f378b4b53a44ba703 in 87ms, sequenceid=8, compaction requested=false 2023-07-12 08:18:48,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 08:18:48,632 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/.tmp/rep_barrier/732921a263824dbc8f5ac8f3a13cd594 2023-07-12 08:18:48,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/namespace/cafa241b8342fa7f378b4b53a44ba703/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-12 08:18:48,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 
2023-07-12 08:18:48,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cafa241b8342fa7f378b4b53a44ba703: 2023-07-12 08:18:48,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689149926645.cafa241b8342fa7f378b4b53a44ba703. 2023-07-12 08:18:48,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/rsgroup/c1941b6a001319dad431d041648c042b/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-12 08:18:48,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 08:18:48,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 2023-07-12 08:18:48,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c1941b6a001319dad431d041648c042b: 2023-07-12 08:18:48,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689149926712.c1941b6a001319dad431d041648c042b. 2023-07-12 08:18:48,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563, disabling compactions & flushes 2023-07-12 08:18:48,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 2023-07-12 08:18:48,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 2023-07-12 08:18:48,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. after waiting 0 ms 2023-07-12 08:18:48,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 2023-07-12 08:18:48,651 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 732921a263824dbc8f5ac8f3a13cd594 2023-07-12 08:18:48,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/quota/2b5fbdc3f7cb8bf3c8f16ff8b8cdc563/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:48,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 2023-07-12 08:18:48,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2b5fbdc3f7cb8bf3c8f16ff8b8cdc563: 2023-07-12 08:18:48,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689149926991.2b5fbdc3f7cb8bf3c8f16ff8b8cdc563. 
2023-07-12 08:18:48,688 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/.tmp/table/fb04ebb24e5a4f3781a1a1df56f8869f 2023-07-12 08:18:48,694 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33385,1689149925793; all regions closed. 2023-07-12 08:18:48,694 DEBUG [RS:1;jenkins-hbase4:33385] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 08:18:48,694 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fb04ebb24e5a4f3781a1a1df56f8869f 2023-07-12 08:18:48,697 INFO [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39181,1689149925844; all regions closed. 2023-07-12 08:18:48,697 DEBUG [RS:2;jenkins-hbase4:39181] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 08:18:48,699 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/.tmp/info/46e7e01f13dd495590babc8a05c6df79 as hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/info/46e7e01f13dd495590babc8a05c6df79 2023-07-12 08:18:48,702 DEBUG [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 08:18:48,706 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 46e7e01f13dd495590babc8a05c6df79 2023-07-12 08:18:48,706 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/info/46e7e01f13dd495590babc8a05c6df79, entries=32, sequenceid=31, filesize=8.5 K 2023-07-12 08:18:48,707 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/.tmp/rep_barrier/732921a263824dbc8f5ac8f3a13cd594 as hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/rep_barrier/732921a263824dbc8f5ac8f3a13cd594 2023-07-12 08:18:48,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 732921a263824dbc8f5ac8f3a13cd594 2023-07-12 08:18:48,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/rep_barrier/732921a263824dbc8f5ac8f3a13cd594, entries=1, sequenceid=31, filesize=4.9 K 2023-07-12 08:18:48,717 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/.tmp/table/fb04ebb24e5a4f3781a1a1df56f8869f as 
hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/table/fb04ebb24e5a4f3781a1a1df56f8869f 2023-07-12 08:18:48,734 DEBUG [RS:1;jenkins-hbase4:33385] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/oldWALs 2023-07-12 08:18:48,734 INFO [RS:1;jenkins-hbase4:33385] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33385%2C1689149925793:(num 1689149926366) 2023-07-12 08:18:48,734 DEBUG [RS:1;jenkins-hbase4:33385] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:48,734 INFO [RS:1;jenkins-hbase4:33385] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:48,735 INFO [RS:1;jenkins-hbase4:33385] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:48,735 INFO [RS:1;jenkins-hbase4:33385] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 08:18:48,735 INFO [RS:1;jenkins-hbase4:33385] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:48,735 INFO [RS:1;jenkins-hbase4:33385] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 08:18:48,736 INFO [RS:1;jenkins-hbase4:33385] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33385 2023-07-12 08:18:48,737 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 08:18:48,738 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fb04ebb24e5a4f3781a1a1df56f8869f 2023-07-12 08:18:48,739 DEBUG [RS:2;jenkins-hbase4:39181] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/oldWALs 2023-07-12 08:18:48,739 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/table/fb04ebb24e5a4f3781a1a1df56f8869f, entries=8, sequenceid=31, filesize=5.2 K 2023-07-12 08:18:48,739 INFO [RS:2;jenkins-hbase4:39181] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39181%2C1689149925844:(num 1689149926366) 2023-07-12 08:18:48,739 DEBUG [RS:2;jenkins-hbase4:39181] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:48,739 INFO [RS:2;jenkins-hbase4:39181] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:48,739 INFO [RS:2;jenkins-hbase4:39181] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:48,739 INFO [RS:2;jenkins-hbase4:39181] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 08:18:48,739 INFO [RS:2;jenkins-hbase4:39181] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:48,739 INFO [RS:2;jenkins-hbase4:39181] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 08:18:48,739 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 08:18:48,741 INFO [RS:2;jenkins-hbase4:39181] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39181 2023-07-12 08:18:48,743 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 241ms, sequenceid=31, compaction requested=false 2023-07-12 08:18:48,744 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:48,744 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:48,744 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:48,744 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:48,744 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:48,744 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33385,1689149925793 2023-07-12 08:18:48,744 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:48,745 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:48,745 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:48,745 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39181,1689149925844 2023-07-12 08:18:48,761 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39181,1689149925844] 2023-07-12 08:18:48,761 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing 
jenkins-hbase4.apache.org,39181,1689149925844; numProcessing=1 2023-07-12 08:18:48,784 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-12 08:18:48,785 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 08:18:48,786 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 08:18:48,786 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 08:18:48,786 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 08:18:48,861 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:48,861 INFO [RS:1;jenkins-hbase4:33385] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33385,1689149925793; zookeeper connection closed. 2023-07-12 08:18:48,861 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:33385-0x101589cec020002, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:48,862 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3d9e4479] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3d9e4479 2023-07-12 08:18:48,863 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39181,1689149925844 already deleted, retry=false 2023-07-12 08:18:48,863 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39181,1689149925844 expired; onlineServers=2 2023-07-12 08:18:48,863 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33385,1689149925793] 2023-07-12 08:18:48,863 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33385,1689149925793; numProcessing=2 2023-07-12 08:18:48,865 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33385,1689149925793 already deleted, retry=false 2023-07-12 08:18:48,865 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33385,1689149925793 expired; onlineServers=1 2023-07-12 08:18:48,872 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:48,872 INFO [RS:2;jenkins-hbase4:39181] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39181,1689149925844; zookeeper connection closed. 
2023-07-12 08:18:48,872 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:39181-0x101589cec020003, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:48,873 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@76aac536] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@76aac536 2023-07-12 08:18:48,902 INFO [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46091,1689149925750; all regions closed. 2023-07-12 08:18:48,902 DEBUG [RS:0;jenkins-hbase4:46091] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 08:18:48,909 DEBUG [RS:0;jenkins-hbase4:46091] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/oldWALs 2023-07-12 08:18:48,909 INFO [RS:0;jenkins-hbase4:46091] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46091%2C1689149925750.meta:.meta(num 1689149926562) 2023-07-12 08:18:48,917 DEBUG [RS:0;jenkins-hbase4:46091] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/oldWALs 2023-07-12 08:18:48,917 INFO [RS:0;jenkins-hbase4:46091] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46091%2C1689149925750:(num 1689149926359) 2023-07-12 08:18:48,917 DEBUG [RS:0;jenkins-hbase4:46091] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:48,917 INFO [RS:0;jenkins-hbase4:46091] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:48,917 INFO [RS:0;jenkins-hbase4:46091] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:48,917 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 08:18:48,918 INFO [RS:0;jenkins-hbase4:46091] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46091 2023-07-12 08:18:48,921 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:48,921 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46091,1689149925750 2023-07-12 08:18:48,922 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46091,1689149925750] 2023-07-12 08:18:48,922 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46091,1689149925750; numProcessing=3 2023-07-12 08:18:48,924 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46091,1689149925750 already deleted, retry=false 2023-07-12 08:18:48,924 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46091,1689149925750 expired; onlineServers=0 2023-07-12 08:18:48,924 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46573,1689149925661' ***** 2023-07-12 08:18:48,924 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 08:18:48,925 DEBUG [M:0;jenkins-hbase4:46573] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22faa36a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:48,925 INFO [M:0;jenkins-hbase4:46573] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:48,927 INFO [M:0;jenkins-hbase4:46573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@49ce88f{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 08:18:48,928 INFO [M:0;jenkins-hbase4:46573] server.AbstractConnector(383): Stopped ServerConnector@262ac480{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:48,928 INFO [M:0;jenkins-hbase4:46573] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:48,928 INFO [M:0;jenkins-hbase4:46573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6618da3c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:48,928 INFO [M:0;jenkins-hbase4:46573] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6eca1326{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:48,929 INFO [M:0;jenkins-hbase4:46573] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46573,1689149925661 2023-07-12 08:18:48,929 INFO [M:0;jenkins-hbase4:46573] regionserver.HRegionServer(1170): 
stopping server jenkins-hbase4.apache.org,46573,1689149925661; all regions closed. 2023-07-12 08:18:48,929 DEBUG [M:0;jenkins-hbase4:46573] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:48,929 INFO [M:0;jenkins-hbase4:46573] master.HMaster(1491): Stopping master jetty server 2023-07-12 08:18:48,930 INFO [M:0;jenkins-hbase4:46573] server.AbstractConnector(383): Stopped ServerConnector@54dab6ef{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:48,930 DEBUG [M:0;jenkins-hbase4:46573] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 08:18:48,930 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 08:18:48,930 DEBUG [M:0;jenkins-hbase4:46573] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 08:18:48,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689149926112] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689149926112,5,FailOnTimeoutGroup] 2023-07-12 08:18:48,931 INFO [M:0;jenkins-hbase4:46573] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 08:18:48,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689149926112] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689149926112,5,FailOnTimeoutGroup] 2023-07-12 08:18:48,931 INFO [M:0;jenkins-hbase4:46573] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 08:18:48,932 INFO [M:0;jenkins-hbase4:46573] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:48,932 DEBUG [M:0;jenkins-hbase4:46573] master.HMaster(1512): Stopping service threads 2023-07-12 08:18:48,932 INFO [M:0;jenkins-hbase4:46573] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 08:18:48,932 ERROR [M:0;jenkins-hbase4:46573] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 08:18:48,932 INFO [M:0;jenkins-hbase4:46573] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 08:18:48,933 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 08:18:49,023 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:49,023 INFO [RS:0;jenkins-hbase4:46091] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46091,1689149925750; zookeeper connection closed. 
2023-07-12 08:18:49,023 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x101589cec020001, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:49,024 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@45a656a8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@45a656a8 2023-07-12 08:18:49,024 INFO [Listener at localhost/36551] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-12 08:18:49,025 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:49,025 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:49,025 INFO [M:0;jenkins-hbase4:46573] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 08:18:49,026 INFO [M:0;jenkins-hbase4:46573] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 08:18:49,026 DEBUG [M:0;jenkins-hbase4:46573] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 08:18:49,026 INFO [M:0;jenkins-hbase4:46573] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:49,026 DEBUG [M:0;jenkins-hbase4:46573] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:49,026 DEBUG [M:0;jenkins-hbase4:46573] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 08:18:49,026 DEBUG [M:0;jenkins-hbase4:46573] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 08:18:49,026 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/master already deleted, retry=false 2023-07-12 08:18:49,026 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:49,026 DEBUG [RegionServerTracker-0] master.ActiveMasterManager(335): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Failed delete of our master address node; KeeperErrorCode = NoNode for /hbase/master 2023-07-12 08:18:49,026 INFO [M:0;jenkins-hbase4:46573] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.98 KB heapSize=109.13 KB 2023-07-12 08:18:49,045 INFO [M:0;jenkins-hbase4:46573] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.98 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0025d47178a540e2aad083dd8d4b4ed4 2023-07-12 08:18:49,050 DEBUG [M:0;jenkins-hbase4:46573] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0025d47178a540e2aad083dd8d4b4ed4 as hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0025d47178a540e2aad083dd8d4b4ed4 2023-07-12 08:18:49,055 INFO [M:0;jenkins-hbase4:46573] regionserver.HStore(1080): Added hdfs://localhost:41039/user/jenkins/test-data/edec56ee-97a5-f90d-23ad-d2e72bfd5efc/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0025d47178a540e2aad083dd8d4b4ed4, entries=24, sequenceid=194, filesize=12.4 K 2023-07-12 08:18:49,056 INFO [M:0;jenkins-hbase4:46573] regionserver.HRegion(2948): Finished flush of dataSize ~92.98 KB/95214, heapSize ~109.11 KB/111728, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=194, compaction requested=false 2023-07-12 08:18:49,057 INFO [M:0;jenkins-hbase4:46573] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:49,057 DEBUG [M:0;jenkins-hbase4:46573] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 08:18:49,061 INFO [M:0;jenkins-hbase4:46573] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 08:18:49,061 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 08:18:49,062 INFO [M:0;jenkins-hbase4:46573] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46573 2023-07-12 08:18:49,063 DEBUG [M:0;jenkins-hbase4:46573] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46573,1689149925661 already deleted, retry=false 2023-07-12 08:18:49,166 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:49,166 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): master:46573-0x101589cec020000, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:49,166 INFO [M:0;jenkins-hbase4:46573] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46573,1689149925661; zookeeper connection closed. 2023-07-12 08:18:49,167 WARN [Listener at localhost/36551] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 08:18:49,172 INFO [Listener at localhost/36551] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 08:18:49,277 WARN [BP-834829317-172.31.14.131-1689149924699 heartbeating to localhost/127.0.0.1:41039] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 08:18:49,277 WARN [BP-834829317-172.31.14.131-1689149924699 heartbeating to localhost/127.0.0.1:41039] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-834829317-172.31.14.131-1689149924699 (Datanode Uuid bc8298c0-2d4e-4b98-bc4f-3518c111353c) service to localhost/127.0.0.1:41039 2023-07-12 08:18:49,278 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/cluster_f95a6f5f-e6de-54b0-cd8f-3d48d26e596e/dfs/data/data5/current/BP-834829317-172.31.14.131-1689149924699] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:49,278 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/cluster_f95a6f5f-e6de-54b0-cd8f-3d48d26e596e/dfs/data/data6/current/BP-834829317-172.31.14.131-1689149924699] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:49,280 WARN [Listener at localhost/36551] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 08:18:49,283 INFO [Listener at localhost/36551] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 08:18:49,388 WARN [BP-834829317-172.31.14.131-1689149924699 heartbeating to localhost/127.0.0.1:41039] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 08:18:49,388 WARN [BP-834829317-172.31.14.131-1689149924699 heartbeating to localhost/127.0.0.1:41039] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-834829317-172.31.14.131-1689149924699 (Datanode Uuid 24da76ec-e760-43b4-aba0-cb849c5ee77a) service to localhost/127.0.0.1:41039 2023-07-12 08:18:49,388 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/cluster_f95a6f5f-e6de-54b0-cd8f-3d48d26e596e/dfs/data/data3/current/BP-834829317-172.31.14.131-1689149924699] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:49,389 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/cluster_f95a6f5f-e6de-54b0-cd8f-3d48d26e596e/dfs/data/data4/current/BP-834829317-172.31.14.131-1689149924699] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:49,391 WARN [Listener at localhost/36551] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 08:18:49,393 INFO [Listener at localhost/36551] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 08:18:49,496 WARN [BP-834829317-172.31.14.131-1689149924699 heartbeating to localhost/127.0.0.1:41039] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 08:18:49,496 WARN [BP-834829317-172.31.14.131-1689149924699 heartbeating to localhost/127.0.0.1:41039] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-834829317-172.31.14.131-1689149924699 (Datanode Uuid 829e6aa1-f50e-41de-9a02-e7f35ecfc368) service to localhost/127.0.0.1:41039 2023-07-12 08:18:49,496 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/cluster_f95a6f5f-e6de-54b0-cd8f-3d48d26e596e/dfs/data/data1/current/BP-834829317-172.31.14.131-1689149924699] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:49,497 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/cluster_f95a6f5f-e6de-54b0-cd8f-3d48d26e596e/dfs/data/data2/current/BP-834829317-172.31.14.131-1689149924699] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 08:18:49,505 INFO [Listener at localhost/36551] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 08:18:49,621 INFO [Listener at localhost/36551] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 08:18:49,657 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 08:18:49,657 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 08:18:49,657 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.log.dir so I do NOT create it in target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e 2023-07-12 08:18:49,657 INFO [Listener at localhost/36551] 
hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9819fb00-75f1-22a9-e93e-9f4fd51877ae/hadoop.tmp.dir so I do NOT create it in target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e 2023-07-12 08:18:49,658 INFO [Listener at localhost/36551] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034, deleteOnExit=true 2023-07-12 08:18:49,658 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 08:18:49,658 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/test.cache.data in system properties and HBase conf 2023-07-12 08:18:49,658 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 08:18:49,658 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir in system properties and HBase conf 2023-07-12 08:18:49,658 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 08:18:49,658 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 08:18:49,659 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 08:18:49,659 DEBUG [Listener at localhost/36551] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 08:18:49,659 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 08:18:49,659 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 08:18:49,659 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 08:18:49,660 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 08:18:49,660 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 08:18:49,660 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 08:18:49,660 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 08:18:49,660 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 08:18:49,661 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 08:18:49,661 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/nfs.dump.dir in system properties and HBase conf 2023-07-12 08:18:49,661 INFO [Listener at localhost/36551] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/java.io.tmpdir in system properties and HBase conf 2023-07-12 08:18:49,661 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 08:18:49,661 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 08:18:49,662 INFO [Listener at localhost/36551] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 08:18:49,666 WARN [Listener at localhost/36551] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 08:18:49,667 WARN [Listener at localhost/36551] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 08:18:49,718 DEBUG [Listener at localhost/36551-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101589cec02000a, quorum=127.0.0.1:63658, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 08:18:49,718 WARN [Listener at localhost/36551] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:49,719 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101589cec02000a, quorum=127.0.0.1:63658, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 08:18:49,721 INFO [Listener at localhost/36551] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:49,732 INFO [Listener at localhost/36551] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/java.io.tmpdir/Jetty_localhost_35197_hdfs____4yly53/webapp 2023-07-12 08:18:49,839 INFO [Listener at localhost/36551] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35197 2023-07-12 08:18:49,843 WARN [Listener at localhost/36551] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 08:18:49,843 WARN [Listener at localhost/36551] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 08:18:49,887 WARN [Listener at localhost/41445] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:49,896 WARN [Listener at localhost/41445] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 08:18:49,898 WARN [Listener 
at localhost/41445] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:49,899 INFO [Listener at localhost/41445] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:49,905 INFO [Listener at localhost/41445] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/java.io.tmpdir/Jetty_localhost_44331_datanode____o86t2w/webapp 2023-07-12 08:18:50,005 INFO [Listener at localhost/41445] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44331 2023-07-12 08:18:50,012 WARN [Listener at localhost/36511] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:50,026 WARN [Listener at localhost/36511] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 08:18:50,028 WARN [Listener at localhost/36511] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:50,029 INFO [Listener at localhost/36511] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:50,031 INFO [Listener at localhost/36511] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/java.io.tmpdir/Jetty_localhost_35817_datanode____yn2bvn/webapp 2023-07-12 08:18:50,116 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xffbe0f9427532888: Processing first storage report for DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b from datanode e2d555f1-42f6-4c7a-b7b1-2b42acfa1735 2023-07-12 08:18:50,116 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xffbe0f9427532888: from storage DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b node DatanodeRegistration(127.0.0.1:46145, datanodeUuid=e2d555f1-42f6-4c7a-b7b1-2b42acfa1735, infoPort=38393, infoSecurePort=0, ipcPort=36511, storageInfo=lv=-57;cid=testClusterID;nsid=1159754126;c=1689149929669), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:50,117 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xffbe0f9427532888: Processing first storage report for DS-a2e636f4-9595-4f65-ad92-7e2f125695b9 from datanode e2d555f1-42f6-4c7a-b7b1-2b42acfa1735 2023-07-12 08:18:50,117 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xffbe0f9427532888: from storage DS-a2e636f4-9595-4f65-ad92-7e2f125695b9 node DatanodeRegistration(127.0.0.1:46145, datanodeUuid=e2d555f1-42f6-4c7a-b7b1-2b42acfa1735, infoPort=38393, infoSecurePort=0, ipcPort=36511, storageInfo=lv=-57;cid=testClusterID;nsid=1159754126;c=1689149929669), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:50,132 INFO [Listener at localhost/36511] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35817 2023-07-12 08:18:50,138 WARN [Listener at localhost/42867] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-12 08:18:50,152 WARN [Listener at localhost/42867] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 08:18:50,154 WARN [Listener at localhost/42867] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 08:18:50,155 INFO [Listener at localhost/42867] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 08:18:50,159 INFO [Listener at localhost/42867] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/java.io.tmpdir/Jetty_localhost_36645_datanode____vzhe57/webapp 2023-07-12 08:18:50,240 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfc2f101a780e4c77: Processing first storage report for DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7 from datanode 58e2d8f7-dd04-4c9e-916c-f47f8ec3e74c 2023-07-12 08:18:50,240 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfc2f101a780e4c77: from storage DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7 node DatanodeRegistration(127.0.0.1:44487, datanodeUuid=58e2d8f7-dd04-4c9e-916c-f47f8ec3e74c, infoPort=43789, infoSecurePort=0, ipcPort=42867, storageInfo=lv=-57;cid=testClusterID;nsid=1159754126;c=1689149929669), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:50,240 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfc2f101a780e4c77: Processing first storage report for DS-03d49a99-c10d-4e51-8fc8-e079aafcd3f3 from datanode 58e2d8f7-dd04-4c9e-916c-f47f8ec3e74c 2023-07-12 08:18:50,240 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfc2f101a780e4c77: from storage DS-03d49a99-c10d-4e51-8fc8-e079aafcd3f3 node DatanodeRegistration(127.0.0.1:44487, datanodeUuid=58e2d8f7-dd04-4c9e-916c-f47f8ec3e74c, infoPort=43789, infoSecurePort=0, ipcPort=42867, storageInfo=lv=-57;cid=testClusterID;nsid=1159754126;c=1689149929669), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:50,259 INFO [Listener at localhost/42867] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36645 2023-07-12 08:18:50,266 WARN [Listener at localhost/43935] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 08:18:50,361 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x849a79b527d5e22a: Processing first storage report for DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb from datanode c486b0df-5774-43d8-99d7-031dd7934dbb 2023-07-12 08:18:50,362 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x849a79b527d5e22a: from storage DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb node DatanodeRegistration(127.0.0.1:33003, datanodeUuid=c486b0df-5774-43d8-99d7-031dd7934dbb, infoPort=35919, infoSecurePort=0, ipcPort=43935, storageInfo=lv=-57;cid=testClusterID;nsid=1159754126;c=1689149929669), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:50,362 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x849a79b527d5e22a: Processing first storage 
report for DS-6319da6f-fffd-4c18-ae54-04180f3ece4e from datanode c486b0df-5774-43d8-99d7-031dd7934dbb 2023-07-12 08:18:50,362 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x849a79b527d5e22a: from storage DS-6319da6f-fffd-4c18-ae54-04180f3ece4e node DatanodeRegistration(127.0.0.1:33003, datanodeUuid=c486b0df-5774-43d8-99d7-031dd7934dbb, infoPort=35919, infoSecurePort=0, ipcPort=43935, storageInfo=lv=-57;cid=testClusterID;nsid=1159754126;c=1689149929669), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 08:18:50,371 DEBUG [Listener at localhost/43935] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e 2023-07-12 08:18:50,373 INFO [Listener at localhost/43935] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/zookeeper_0, clientPort=54034, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 08:18:50,374 INFO [Listener at localhost/43935] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54034 2023-07-12 08:18:50,374 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,375 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,393 INFO [Listener at localhost/43935] util.FSUtils(471): Created version file at hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a with version=8 2023-07-12 08:18:50,393 INFO [Listener at localhost/43935] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42813/user/jenkins/test-data/11126a0e-1f3a-a834-1a0d-434a8e0dc4a9/hbase-staging 2023-07-12 08:18:50,394 DEBUG [Listener at localhost/43935] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 08:18:50,394 DEBUG [Listener at localhost/43935] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 08:18:50,394 DEBUG [Listener at localhost/43935] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 08:18:50,395 DEBUG [Listener at localhost/43935] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-12 08:18:50,396 INFO [Listener at localhost/43935] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:50,396 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,396 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,396 INFO [Listener at localhost/43935] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:50,396 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,396 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:50,396 INFO [Listener at localhost/43935] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:50,397 INFO [Listener at localhost/43935] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36711 2023-07-12 08:18:50,398 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,399 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,399 INFO [Listener at localhost/43935] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36711 connecting to ZooKeeper ensemble=127.0.0.1:54034 2023-07-12 08:18:50,406 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:367110x0, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:50,407 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36711-0x101589cfe880000 connected 2023-07-12 08:18:50,426 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:50,426 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:50,426 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:50,427 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36711 2023-07-12 08:18:50,427 DEBUG [Listener at localhost/43935] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36711 2023-07-12 08:18:50,427 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36711 2023-07-12 08:18:50,428 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36711 2023-07-12 08:18:50,428 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36711 2023-07-12 08:18:50,430 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:50,430 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:50,430 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:50,431 INFO [Listener at localhost/43935] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 08:18:50,431 INFO [Listener at localhost/43935] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:50,431 INFO [Listener at localhost/43935] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:50,431 INFO [Listener at localhost/43935] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 08:18:50,432 INFO [Listener at localhost/43935] http.HttpServer(1146): Jetty bound to port 39521 2023-07-12 08:18:50,432 INFO [Listener at localhost/43935] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:50,433 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,433 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@69823956{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:50,434 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,434 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7057b85a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:50,440 INFO [Listener at localhost/43935] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:50,441 INFO [Listener at localhost/43935] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:50,441 INFO [Listener at localhost/43935] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:50,441 INFO [Listener at localhost/43935] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:50,442 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,443 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@12396c25{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 08:18:50,444 INFO [Listener at localhost/43935] server.AbstractConnector(333): Started ServerConnector@67d7b2b8{HTTP/1.1, (http/1.1)}{0.0.0.0:39521} 2023-07-12 08:18:50,445 INFO [Listener at localhost/43935] server.Server(415): Started @41002ms 2023-07-12 08:18:50,445 INFO [Listener at localhost/43935] master.HMaster(444): hbase.rootdir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a, hbase.cluster.distributed=false 2023-07-12 08:18:50,457 INFO [Listener at localhost/43935] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:50,458 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,458 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,458 INFO [Listener at localhost/43935] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 
08:18:50,458 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,458 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:50,458 INFO [Listener at localhost/43935] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:50,459 INFO [Listener at localhost/43935] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44351 2023-07-12 08:18:50,460 INFO [Listener at localhost/43935] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:50,461 DEBUG [Listener at localhost/43935] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:50,461 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,462 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,463 INFO [Listener at localhost/43935] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44351 connecting to ZooKeeper ensemble=127.0.0.1:54034 2023-07-12 08:18:50,467 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:443510x0, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:50,469 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44351-0x101589cfe880001 connected 2023-07-12 08:18:50,469 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:50,469 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:50,470 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:50,474 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44351 2023-07-12 08:18:50,474 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44351 2023-07-12 08:18:50,475 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44351 2023-07-12 08:18:50,476 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44351 2023-07-12 08:18:50,477 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44351 2023-07-12 08:18:50,479 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:50,479 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:50,479 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:50,479 INFO [Listener at localhost/43935] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:50,479 INFO [Listener at localhost/43935] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:50,479 INFO [Listener at localhost/43935] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:50,480 INFO [Listener at localhost/43935] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 08:18:50,480 INFO [Listener at localhost/43935] http.HttpServer(1146): Jetty bound to port 34503 2023-07-12 08:18:50,480 INFO [Listener at localhost/43935] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:50,483 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,483 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39bfefc6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:50,483 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,484 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ea72a52{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:50,488 INFO [Listener at localhost/43935] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:50,489 INFO [Listener at localhost/43935] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:50,489 INFO [Listener at localhost/43935] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:50,489 INFO [Listener at localhost/43935] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:50,490 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,490 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1f408f01{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:50,491 INFO [Listener at localhost/43935] server.AbstractConnector(333): Started ServerConnector@1c0eb1ab{HTTP/1.1, (http/1.1)}{0.0.0.0:34503} 2023-07-12 08:18:50,491 INFO [Listener at localhost/43935] server.Server(415): Started @41048ms 2023-07-12 08:18:50,502 INFO [Listener at localhost/43935] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:50,502 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,503 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,503 INFO [Listener at localhost/43935] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:50,503 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,503 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:50,503 INFO [Listener at localhost/43935] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:50,504 INFO [Listener at localhost/43935] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33557 2023-07-12 08:18:50,504 INFO [Listener at localhost/43935] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:50,505 DEBUG [Listener at localhost/43935] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:50,506 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,507 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,508 INFO [Listener at localhost/43935] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33557 connecting to ZooKeeper ensemble=127.0.0.1:54034 2023-07-12 08:18:50,512 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:335570x0, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:50,513 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33557-0x101589cfe880002 connected 2023-07-12 08:18:50,513 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): 
regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:50,514 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:50,514 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:50,515 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33557 2023-07-12 08:18:50,515 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33557 2023-07-12 08:18:50,518 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33557 2023-07-12 08:18:50,518 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33557 2023-07-12 08:18:50,518 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33557 2023-07-12 08:18:50,520 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:50,520 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:50,520 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:50,520 INFO [Listener at localhost/43935] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:50,520 INFO [Listener at localhost/43935] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:50,520 INFO [Listener at localhost/43935] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:50,520 INFO [Listener at localhost/43935] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 08:18:50,521 INFO [Listener at localhost/43935] http.HttpServer(1146): Jetty bound to port 34673 2023-07-12 08:18:50,521 INFO [Listener at localhost/43935] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:50,523 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,523 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ceb7e75{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:50,523 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,524 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41412b54{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:50,528 INFO [Listener at localhost/43935] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:50,528 INFO [Listener at localhost/43935] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:50,528 INFO [Listener at localhost/43935] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:50,528 INFO [Listener at localhost/43935] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 08:18:50,531 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,531 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3117de2f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:50,533 INFO [Listener at localhost/43935] server.AbstractConnector(333): Started ServerConnector@33f3befb{HTTP/1.1, (http/1.1)}{0.0.0.0:34673} 2023-07-12 08:18:50,533 INFO [Listener at localhost/43935] server.Server(415): Started @41090ms 2023-07-12 08:18:50,544 INFO [Listener at localhost/43935] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:50,544 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,544 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,544 INFO [Listener at localhost/43935] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:50,544 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-12 08:18:50,544 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:50,544 INFO [Listener at localhost/43935] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:50,545 INFO [Listener at localhost/43935] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39103 2023-07-12 08:18:50,546 INFO [Listener at localhost/43935] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:50,548 DEBUG [Listener at localhost/43935] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:50,549 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,550 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,550 INFO [Listener at localhost/43935] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39103 connecting to ZooKeeper ensemble=127.0.0.1:54034 2023-07-12 08:18:50,553 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:391030x0, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:50,555 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39103-0x101589cfe880003 connected 2023-07-12 08:18:50,555 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:50,555 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:50,555 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:50,558 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39103 2023-07-12 08:18:50,559 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39103 2023-07-12 08:18:50,560 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39103 2023-07-12 08:18:50,563 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39103 2023-07-12 08:18:50,563 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39103 2023-07-12 08:18:50,564 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:50,564 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:50,564 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:50,565 INFO [Listener at localhost/43935] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:50,565 INFO [Listener at localhost/43935] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:50,565 INFO [Listener at localhost/43935] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:50,565 INFO [Listener at localhost/43935] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 08:18:50,565 INFO [Listener at localhost/43935] http.HttpServer(1146): Jetty bound to port 35961 2023-07-12 08:18:50,566 INFO [Listener at localhost/43935] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:50,569 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,569 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@15be34c0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:50,569 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,569 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6d4375e7{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:50,574 INFO [Listener at localhost/43935] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:50,575 INFO [Listener at localhost/43935] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:50,575 INFO [Listener at localhost/43935] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:50,575 INFO [Listener at localhost/43935] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 08:18:50,576 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:50,577 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3720103e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:50,578 INFO [Listener at localhost/43935] server.AbstractConnector(333): Started ServerConnector@48d26a39{HTTP/1.1, (http/1.1)}{0.0.0.0:35961} 2023-07-12 08:18:50,579 INFO [Listener at localhost/43935] server.Server(415): Started @41135ms 2023-07-12 08:18:50,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:50,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@1805bf9b{HTTP/1.1, (http/1.1)}{0.0.0.0:44979} 2023-07-12 08:18:50,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @41144ms 2023-07-12 08:18:50,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36711,1689149930395 2023-07-12 08:18:50,589 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 08:18:50,590 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36711,1689149930395 2023-07-12 08:18:50,591 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:50,591 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:50,591 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:50,591 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:50,592 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:50,592 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 08:18:50,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36711,1689149930395 from backup master directory 2023-07-12 
08:18:50,594 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 08:18:50,595 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36711,1689149930395 2023-07-12 08:18:50,595 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 08:18:50,595 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 08:18:50,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36711,1689149930395 2023-07-12 08:18:50,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/hbase.id with ID: 97047a5b-1252-47d6-beab-37d33744b315 2023-07-12 08:18:50,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:50,626 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:50,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4ee9282e to 127.0.0.1:54034 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:50,647 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@264fbd43, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:50,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:50,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 08:18:50,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:50,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/data/master/store-tmp 2023-07-12 08:18:50,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:50,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 08:18:50,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:50,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:50,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 08:18:50,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:50,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 08:18:50,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 08:18:50,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/WALs/jenkins-hbase4.apache.org,36711,1689149930395 2023-07-12 08:18:50,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36711%2C1689149930395, suffix=, logDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/WALs/jenkins-hbase4.apache.org,36711,1689149930395, archiveDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/oldWALs, maxLogs=10 2023-07-12 08:18:50,682 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK] 2023-07-12 08:18:50,683 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK] 2023-07-12 08:18:50,683 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK] 2023-07-12 08:18:50,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/WALs/jenkins-hbase4.apache.org,36711,1689149930395/jenkins-hbase4.apache.org%2C36711%2C1689149930395.1689149930665 2023-07-12 08:18:50,689 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK], DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK]] 2023-07-12 08:18:50,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:50,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:50,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:50,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:50,693 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:50,694 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 08:18:50,695 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 08:18:50,695 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:50,696 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:50,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:50,699 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 08:18:50,701 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:50,701 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9712123360, jitterRate=-0.09548802673816681}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:50,701 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 08:18:50,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 08:18:50,704 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 08:18:50,704 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 08:18:50,704 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 08:18:50,705 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 08:18:50,705 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 08:18:50,705 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 08:18:50,707 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 08:18:50,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-12 08:18:50,709 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 08:18:50,709 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 08:18:50,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 08:18:50,737 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:50,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 08:18:50,738 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 08:18:50,739 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 08:18:50,740 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:50,740 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:50,740 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-12 08:18:50,740 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:50,740 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:50,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36711,1689149930395, sessionid=0x101589cfe880000, setting cluster-up flag (Was=false) 2023-07-12 08:18:50,747 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:50,752 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 08:18:50,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36711,1689149930395 2023-07-12 08:18:50,757 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:50,762 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 08:18:50,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36711,1689149930395 2023-07-12 08:18:50,763 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.hbase-snapshot/.tmp 2023-07-12 08:18:50,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 08:18:50,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 08:18:50,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 08:18:50,766 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 08:18:50,766 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-12 08:18:50,767 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 08:18:50,777 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 08:18:50,778 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 08:18:50,778 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 08:18:50,778 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 08:18:50,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:50,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:50,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:50,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 08:18:50,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-12 08:18:50,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:50,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,780 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689149960780 2023-07-12 08:18:50,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 08:18:50,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 08:18:50,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 08:18:50,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 08:18:50,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 08:18:50,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 08:18:50,781 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 08:18:50,781 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 08:18:50,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,782 INFO [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(951): ClusterId : 97047a5b-1252-47d6-beab-37d33744b315 2023-07-12 08:18:50,782 DEBUG [RS:0;jenkins-hbase4:44351] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:50,782 INFO [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(951): ClusterId : 97047a5b-1252-47d6-beab-37d33744b315 2023-07-12 08:18:50,782 INFO [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(951): ClusterId : 97047a5b-1252-47d6-beab-37d33744b315 2023-07-12 08:18:50,782 DEBUG [RS:1;jenkins-hbase4:33557] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:50,782 DEBUG [RS:2;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:50,783 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', 
REPLICATION_SCOPE => '0'} 2023-07-12 08:18:50,784 DEBUG [RS:0;jenkins-hbase4:44351] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:50,784 DEBUG [RS:0;jenkins-hbase4:44351] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:50,785 DEBUG [RS:1;jenkins-hbase4:33557] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:50,785 DEBUG [RS:1;jenkins-hbase4:33557] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:50,785 DEBUG [RS:2;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:50,785 DEBUG [RS:2;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:50,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 08:18:50,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 08:18:50,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 08:18:50,788 DEBUG [RS:0;jenkins-hbase4:44351] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:50,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 08:18:50,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 08:18:50,789 DEBUG [RS:0;jenkins-hbase4:44351] zookeeper.ReadOnlyZKClient(139): Connect 0x2e3df672 to 127.0.0.1:54034 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:50,790 DEBUG [RS:2;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:50,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689149930789,5,FailOnTimeoutGroup] 2023-07-12 08:18:50,790 DEBUG [RS:1;jenkins-hbase4:33557] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:50,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689149930790,5,FailOnTimeoutGroup] 2023-07-12 08:18:50,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 08:18:50,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:50,793 DEBUG [RS:1;jenkins-hbase4:33557] zookeeper.ReadOnlyZKClient(139): Connect 0x3364c31e to 127.0.0.1:54034 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:50,793 DEBUG [RS:2;jenkins-hbase4:39103] zookeeper.ReadOnlyZKClient(139): Connect 0x1689d202 to 127.0.0.1:54034 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:50,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,803 DEBUG [RS:0;jenkins-hbase4:44351] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@20fa0dc5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:50,804 DEBUG [RS:0;jenkins-hbase4:44351] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24c59d02, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:50,811 DEBUG [RS:1;jenkins-hbase4:33557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@76664b16, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:50,811 DEBUG [RS:2;jenkins-hbase4:39103] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5651cda1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:50,811 DEBUG [RS:1;jenkins-hbase4:33557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ae5426c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:50,811 DEBUG [RS:2;jenkins-hbase4:39103] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2755a242, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:50,813 DEBUG [RS:0;jenkins-hbase4:44351] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44351 2023-07-12 08:18:50,813 INFO [RS:0;jenkins-hbase4:44351] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:50,813 INFO [RS:0;jenkins-hbase4:44351] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:50,813 DEBUG [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 08:18:50,814 INFO [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36711,1689149930395 with isa=jenkins-hbase4.apache.org/172.31.14.131:44351, startcode=1689149930457 2023-07-12 08:18:50,814 DEBUG [RS:0;jenkins-hbase4:44351] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:50,817 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37931, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:50,817 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:50,819 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36711] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:50,819 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 08:18:50,819 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 08:18:50,820 DEBUG [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a 2023-07-12 08:18:50,820 DEBUG [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41445 2023-07-12 08:18:50,820 DEBUG [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39521 2023-07-12 08:18:50,820 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:50,820 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a 2023-07-12 08:18:50,821 DEBUG 
[Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:50,821 DEBUG [RS:0;jenkins-hbase4:44351] zookeeper.ZKUtil(162): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:50,822 WARN [RS:0;jenkins-hbase4:44351] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 08:18:50,822 INFO [RS:0;jenkins-hbase4:44351] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:50,822 DEBUG [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:50,824 DEBUG [RS:1;jenkins-hbase4:33557] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33557 2023-07-12 08:18:50,824 INFO [RS:1;jenkins-hbase4:33557] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:50,824 INFO [RS:1;jenkins-hbase4:33557] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:50,824 DEBUG [RS:2;jenkins-hbase4:39103] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:39103 2023-07-12 08:18:50,824 DEBUG [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 08:18:50,824 INFO [RS:2;jenkins-hbase4:39103] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:50,824 INFO [RS:2;jenkins-hbase4:39103] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:50,824 DEBUG [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 08:18:50,825 INFO [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36711,1689149930395 with isa=jenkins-hbase4.apache.org/172.31.14.131:33557, startcode=1689149930502 2023-07-12 08:18:50,825 INFO [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36711,1689149930395 with isa=jenkins-hbase4.apache.org/172.31.14.131:39103, startcode=1689149930544 2023-07-12 08:18:50,825 DEBUG [RS:1;jenkins-hbase4:33557] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:50,825 DEBUG [RS:2;jenkins-hbase4:39103] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:50,833 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44351,1689149930457] 2023-07-12 08:18:50,833 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59951, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:50,833 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48969, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:50,834 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36711] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:50,834 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 08:18:50,834 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 08:18:50,834 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36711] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:50,834 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 08:18:50,834 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 08:18:50,834 DEBUG [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a 2023-07-12 08:18:50,835 DEBUG [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41445 2023-07-12 08:18:50,835 DEBUG [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a 2023-07-12 08:18:50,835 DEBUG [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39521 2023-07-12 08:18:50,835 DEBUG [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41445 2023-07-12 08:18:50,835 DEBUG [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39521 2023-07-12 08:18:50,839 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:50,839 DEBUG [RS:0;jenkins-hbase4:44351] zookeeper.ZKUtil(162): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:50,839 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:50,840 DEBUG [RS:1;jenkins-hbase4:33557] zookeeper.ZKUtil(162): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:50,840 WARN [RS:1;jenkins-hbase4:33557] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 08:18:50,840 DEBUG [RS:2;jenkins-hbase4:39103] zookeeper.ZKUtil(162): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:50,840 INFO [RS:1;jenkins-hbase4:33557] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:50,840 WARN [RS:2;jenkins-hbase4:39103] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 08:18:50,840 DEBUG [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:50,840 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33557,1689149930502] 2023-07-12 08:18:50,840 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39103,1689149930544] 2023-07-12 08:18:50,840 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:50,840 INFO [RS:2;jenkins-hbase4:39103] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:50,840 DEBUG [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:50,840 DEBUG [RS:0;jenkins-hbase4:44351] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:50,841 INFO [RS:0;jenkins-hbase4:44351] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:50,841 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:50,841 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:50,845 INFO [RS:0;jenkins-hbase4:44351] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:50,845 INFO [RS:0;jenkins-hbase4:44351] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 08:18:50,845 INFO [RS:0;jenkins-hbase4:44351] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,848 INFO [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:50,850 INFO [RS:0;jenkins-hbase4:44351] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:50,850 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:50,850 DEBUG [RS:0;jenkins-hbase4:44351] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,850 DEBUG [RS:0;jenkins-hbase4:44351] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,850 DEBUG [RS:0;jenkins-hbase4:44351] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,850 DEBUG [RS:0;jenkins-hbase4:44351] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,850 DEBUG [RS:0;jenkins-hbase4:44351] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,850 DEBUG [RS:0;jenkins-hbase4:44351] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:50,850 DEBUG [RS:0;jenkins-hbase4:44351] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,850 DEBUG [RS:0;jenkins-hbase4:44351] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,850 DEBUG [RS:0;jenkins-hbase4:44351] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,850 DEBUG [RS:0;jenkins-hbase4:44351] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,851 DEBUG [RS:2;jenkins-hbase4:39103] zookeeper.ZKUtil(162): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:50,851 DEBUG [RS:1;jenkins-hbase4:33557] zookeeper.ZKUtil(162): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:50,851 DEBUG [RS:2;jenkins-hbase4:39103] zookeeper.ZKUtil(162): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:50,852 DEBUG [RS:1;jenkins-hbase4:33557] zookeeper.ZKUtil(162): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:50,852 DEBUG [RS:2;jenkins-hbase4:39103] zookeeper.ZKUtil(162): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:50,852 DEBUG [RS:1;jenkins-hbase4:33557] zookeeper.ZKUtil(162): 
regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:50,852 DEBUG [RS:2;jenkins-hbase4:39103] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:50,853 INFO [RS:2;jenkins-hbase4:39103] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:50,853 DEBUG [RS:1;jenkins-hbase4:33557] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:50,854 INFO [RS:1;jenkins-hbase4:33557] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:50,854 INFO [RS:0;jenkins-hbase4:44351] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,855 INFO [RS:0;jenkins-hbase4:44351] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,855 INFO [RS:2;jenkins-hbase4:39103] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:50,855 INFO [RS:0;jenkins-hbase4:44351] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,855 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 08:18:50,855 INFO [RS:1;jenkins-hbase4:33557] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:50,856 INFO [RS:2;jenkins-hbase4:39103] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 08:18:50,856 INFO [RS:2;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,857 INFO [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:50,857 INFO [RS:1;jenkins-hbase4:33557] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 08:18:50,857 INFO [RS:1;jenkins-hbase4:33557] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,858 INFO [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:50,858 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/info 2023-07-12 08:18:50,858 INFO [RS:2;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:50,859 DEBUG [RS:2;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,859 DEBUG [RS:2;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,860 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 08:18:50,860 DEBUG [RS:2;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,860 INFO [RS:1;jenkins-hbase4:33557] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,860 DEBUG [RS:1;jenkins-hbase4:33557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,860 DEBUG [RS:2;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,861 DEBUG [RS:2;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,861 DEBUG [RS:2;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:50,861 DEBUG [RS:2;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,860 DEBUG [RS:1;jenkins-hbase4:33557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,861 DEBUG [RS:2;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,861 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:50,861 DEBUG [RS:2;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,861 DEBUG [RS:1;jenkins-hbase4:33557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-07-12 08:18:50,861 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 08:18:50,861 DEBUG [RS:2;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,861 DEBUG [RS:1;jenkins-hbase4:33557] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,861 DEBUG [RS:1;jenkins-hbase4:33557] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,861 DEBUG [RS:1;jenkins-hbase4:33557] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:50,861 DEBUG [RS:1;jenkins-hbase4:33557] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,861 DEBUG [RS:1;jenkins-hbase4:33557] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,862 DEBUG [RS:1;jenkins-hbase4:33557] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,862 DEBUG [RS:1;jenkins-hbase4:33557] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:50,862 INFO [RS:2;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,862 INFO [RS:2;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,862 INFO [RS:2;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,870 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/rep_barrier 2023-07-12 08:18:50,870 INFO [RS:1;jenkins-hbase4:33557] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,871 INFO [RS:1;jenkins-hbase4:33557] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,871 INFO [RS:1;jenkins-hbase4:33557] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:50,871 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 08:18:50,875 INFO [RS:0;jenkins-hbase4:44351] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:50,875 INFO [RS:0;jenkins-hbase4:44351] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44351,1689149930457-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,875 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:50,876 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 08:18:50,877 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/table 2023-07-12 08:18:50,878 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 08:18:50,878 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:50,880 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740 2023-07-12 08:18:50,881 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740 2023-07-12 08:18:50,881 INFO [RS:2;jenkins-hbase4:39103] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:50,882 INFO [RS:2;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore 
name=jenkins-hbase4.apache.org,39103,1689149930544-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:50,883 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 08:18:50,885 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 08:18:50,888 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:50,888 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9987726880, jitterRate=-0.06982044875621796}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 08:18:50,888 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 08:18:50,888 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 08:18:50,889 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 08:18:50,889 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 08:18:50,889 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 08:18:50,889 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 08:18:50,889 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 08:18:50,889 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 08:18:50,890 INFO [RS:1;jenkins-hbase4:33557] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:50,890 INFO [RS:1;jenkins-hbase4:33557] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33557,1689149930502-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:50,890 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 08:18:50,890 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 08:18:50,890 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 08:18:50,891 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 08:18:50,895 INFO [RS:0;jenkins-hbase4:44351] regionserver.Replication(203): jenkins-hbase4.apache.org,44351,1689149930457 started 2023-07-12 08:18:50,895 INFO [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44351,1689149930457, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44351, sessionid=0x101589cfe880001 2023-07-12 08:18:50,895 DEBUG [RS:0;jenkins-hbase4:44351] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:50,895 DEBUG [RS:0;jenkins-hbase4:44351] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:50,895 DEBUG [RS:0;jenkins-hbase4:44351] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44351,1689149930457' 2023-07-12 08:18:50,895 DEBUG [RS:0;jenkins-hbase4:44351] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:50,896 DEBUG [RS:0;jenkins-hbase4:44351] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:50,896 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 08:18:50,896 DEBUG [RS:0;jenkins-hbase4:44351] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:50,896 DEBUG [RS:0;jenkins-hbase4:44351] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 08:18:50,896 DEBUG [RS:0;jenkins-hbase4:44351] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:50,896 DEBUG [RS:0;jenkins-hbase4:44351] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44351,1689149930457' 2023-07-12 08:18:50,897 DEBUG [RS:0;jenkins-hbase4:44351] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:50,897 DEBUG [RS:0;jenkins-hbase4:44351] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:50,898 DEBUG [RS:0;jenkins-hbase4:44351] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:50,898 INFO [RS:0;jenkins-hbase4:44351] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 
2023-07-12 08:18:50,898 INFO [RS:0;jenkins-hbase4:44351] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 08:18:50,903 INFO [RS:2;jenkins-hbase4:39103] regionserver.Replication(203): jenkins-hbase4.apache.org,39103,1689149930544 started 2023-07-12 08:18:50,903 INFO [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39103,1689149930544, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39103, sessionid=0x101589cfe880003 2023-07-12 08:18:50,903 DEBUG [RS:2;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:50,903 DEBUG [RS:2;jenkins-hbase4:39103] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:50,903 DEBUG [RS:2;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39103,1689149930544' 2023-07-12 08:18:50,903 DEBUG [RS:2;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:50,904 DEBUG [RS:2;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:50,904 DEBUG [RS:2;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:50,904 DEBUG [RS:2;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 08:18:50,904 DEBUG [RS:2;jenkins-hbase4:39103] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:50,904 DEBUG [RS:2;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39103,1689149930544' 2023-07-12 08:18:50,904 DEBUG [RS:2;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:50,905 DEBUG [RS:2;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:50,905 DEBUG [RS:2;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:50,905 INFO [RS:2;jenkins-hbase4:39103] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 08:18:50,905 INFO [RS:2;jenkins-hbase4:39103] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 08:18:50,909 INFO [RS:1;jenkins-hbase4:33557] regionserver.Replication(203): jenkins-hbase4.apache.org,33557,1689149930502 started 2023-07-12 08:18:50,909 INFO [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33557,1689149930502, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33557, sessionid=0x101589cfe880002 2023-07-12 08:18:50,909 DEBUG [RS:1;jenkins-hbase4:33557] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:50,909 DEBUG [RS:1;jenkins-hbase4:33557] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:50,909 DEBUG [RS:1;jenkins-hbase4:33557] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33557,1689149930502' 2023-07-12 08:18:50,909 DEBUG [RS:1;jenkins-hbase4:33557] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:50,909 DEBUG [RS:1;jenkins-hbase4:33557] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:50,909 DEBUG [RS:1;jenkins-hbase4:33557] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:50,909 DEBUG [RS:1;jenkins-hbase4:33557] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 08:18:50,909 DEBUG [RS:1;jenkins-hbase4:33557] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:50,909 DEBUG [RS:1;jenkins-hbase4:33557] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33557,1689149930502' 2023-07-12 08:18:50,909 DEBUG [RS:1;jenkins-hbase4:33557] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:50,910 DEBUG [RS:1;jenkins-hbase4:33557] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:50,910 DEBUG [RS:1;jenkins-hbase4:33557] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:50,910 INFO [RS:1;jenkins-hbase4:33557] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 08:18:50,910 INFO [RS:1;jenkins-hbase4:33557] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 08:18:51,000 INFO [RS:0;jenkins-hbase4:44351] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44351%2C1689149930457, suffix=, logDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,44351,1689149930457, archiveDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/oldWALs, maxLogs=32 2023-07-12 08:18:51,007 INFO [RS:2;jenkins-hbase4:39103] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39103%2C1689149930544, suffix=, logDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,39103,1689149930544, archiveDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/oldWALs, maxLogs=32 2023-07-12 08:18:51,013 INFO [RS:1;jenkins-hbase4:33557] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33557%2C1689149930502, suffix=, logDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,33557,1689149930502, archiveDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/oldWALs, maxLogs=32 2023-07-12 08:18:51,020 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK] 2023-07-12 08:18:51,020 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK] 2023-07-12 08:18:51,021 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK] 2023-07-12 08:18:51,027 INFO [RS:0;jenkins-hbase4:44351] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,44351,1689149930457/jenkins-hbase4.apache.org%2C44351%2C1689149930457.1689149931000 2023-07-12 08:18:51,028 DEBUG [RS:0;jenkins-hbase4:44351] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK], DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK]] 2023-07-12 08:18:51,033 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK] 2023-07-12 08:18:51,033 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK] 2023-07-12 08:18:51,034 DEBUG 
[RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK] 2023-07-12 08:18:51,043 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK] 2023-07-12 08:18:51,043 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK] 2023-07-12 08:18:51,043 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK] 2023-07-12 08:18:51,043 INFO [RS:2;jenkins-hbase4:39103] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,39103,1689149930544/jenkins-hbase4.apache.org%2C39103%2C1689149930544.1689149931007 2023-07-12 08:18:51,046 DEBUG [jenkins-hbase4:36711] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 08:18:51,046 DEBUG [RS:2;jenkins-hbase4:39103] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK], DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK]] 2023-07-12 08:18:51,047 DEBUG [jenkins-hbase4:36711] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:51,047 DEBUG [jenkins-hbase4:36711] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:51,047 DEBUG [jenkins-hbase4:36711] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:51,047 DEBUG [jenkins-hbase4:36711] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:51,047 DEBUG [jenkins-hbase4:36711] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:51,048 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33557,1689149930502, state=OPENING 2023-07-12 08:18:51,048 INFO [RS:1;jenkins-hbase4:33557] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,33557,1689149930502/jenkins-hbase4.apache.org%2C33557%2C1689149930502.1689149931013 2023-07-12 08:18:51,048 DEBUG [RS:1;jenkins-hbase4:33557] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK], DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK]] 2023-07-12 08:18:51,049 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta 
region location doesn't exist, create it 2023-07-12 08:18:51,050 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:51,050 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33557,1689149930502}] 2023-07-12 08:18:51,050 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 08:18:51,073 WARN [ReadOnlyZKClient-127.0.0.1:54034@0x4ee9282e] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 08:18:51,074 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36711,1689149930395] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:51,075 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59468, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:51,076 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33557] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:59468 deadline: 1689149991076, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:51,204 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:51,205 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:51,207 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59480, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:51,211 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 08:18:51,211 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:51,213 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33557%2C1689149930502.meta, suffix=.meta, logDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,33557,1689149930502, archiveDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/oldWALs, maxLogs=32 2023-07-12 08:18:51,228 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK] 2023-07-12 08:18:51,229 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK] 2023-07-12 08:18:51,229 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK] 2023-07-12 08:18:51,231 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,33557,1689149930502/jenkins-hbase4.apache.org%2C33557%2C1689149930502.meta.1689149931213.meta 2023-07-12 08:18:51,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK], DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK]] 2023-07-12 08:18:51,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:51,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 08:18:51,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 08:18:51,232 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 08:18:51,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 08:18:51,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:51,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 08:18:51,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 08:18:51,234 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 08:18:51,235 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/info 2023-07-12 08:18:51,235 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/info 2023-07-12 08:18:51,235 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 08:18:51,236 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:51,236 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 08:18:51,236 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/rep_barrier 2023-07-12 08:18:51,236 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/rep_barrier 2023-07-12 08:18:51,237 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 08:18:51,237 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:51,237 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 08:18:51,238 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/table 2023-07-12 08:18:51,238 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/table 2023-07-12 08:18:51,238 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 08:18:51,239 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:51,240 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740 2023-07-12 08:18:51,240 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740 2023-07-12 08:18:51,242 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 08:18:51,243 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 08:18:51,244 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11173023680, jitterRate=0.040568917989730835}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 08:18:51,244 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 08:18:51,245 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689149931204 2023-07-12 08:18:51,249 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 08:18:51,250 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 08:18:51,250 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33557,1689149930502, state=OPEN 2023-07-12 08:18:51,252 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 08:18:51,252 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 08:18:51,253 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 08:18:51,253 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33557,1689149930502 in 202 msec 2023-07-12 08:18:51,255 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 08:18:51,255 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 363 msec 2023-07-12 08:18:51,256 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 489 msec 2023-07-12 08:18:51,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689149931256, completionTime=-1 2023-07-12 08:18:51,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 08:18:51,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 08:18:51,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 08:18:51,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689149991260 2023-07-12 08:18:51,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689150051260 2023-07-12 08:18:51,260 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-12 08:18:51,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36711,1689149930395-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:51,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36711,1689149930395-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:51,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36711,1689149930395-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:51,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36711, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:51,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:51,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 08:18:51,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:51,267 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 08:18:51,268 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 08:18:51,268 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:51,269 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:51,270 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:51,271 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a empty. 2023-07-12 08:18:51,271 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:51,271 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 08:18:51,284 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:51,285 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => cd50f667456c8de6113c503bce79a76a, NAME => 'hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp 2023-07-12 08:18:51,298 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:51,298 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing cd50f667456c8de6113c503bce79a76a, disabling compactions & flushes 2023-07-12 08:18:51,298 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 
2023-07-12 08:18:51,298 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:51,298 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. after waiting 0 ms 2023-07-12 08:18:51,298 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:51,298 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:51,298 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for cd50f667456c8de6113c503bce79a76a: 2023-07-12 08:18:51,300 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:51,301 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149931301"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149931301"}]},"ts":"1689149931301"} 2023-07-12 08:18:51,303 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:51,304 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:51,304 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149931304"}]},"ts":"1689149931304"} 2023-07-12 08:18:51,305 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 08:18:51,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:51,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:51,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:51,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:51,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:51,308 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cd50f667456c8de6113c503bce79a76a, ASSIGN}] 2023-07-12 08:18:51,310 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cd50f667456c8de6113c503bce79a76a, ASSIGN 2023-07-12 08:18:51,310 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=cd50f667456c8de6113c503bce79a76a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39103,1689149930544; forceNewPlan=false, retain=false 2023-07-12 08:18:51,378 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36711,1689149930395] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:51,380 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36711,1689149930395] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 08:18:51,381 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:51,382 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:51,383 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:51,384 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0 empty. 
2023-07-12 08:18:51,384 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:51,385 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 08:18:51,402 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:51,403 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e32c68350c776569b0b9e6b278ff09a0, NAME => 'hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp 2023-07-12 08:18:51,411 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:51,411 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing e32c68350c776569b0b9e6b278ff09a0, disabling compactions & flushes 2023-07-12 08:18:51,411 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 2023-07-12 08:18:51,411 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 2023-07-12 08:18:51,411 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. after waiting 0 ms 2023-07-12 08:18:51,411 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 2023-07-12 08:18:51,411 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 
2023-07-12 08:18:51,411 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for e32c68350c776569b0b9e6b278ff09a0: 2023-07-12 08:18:51,413 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:51,414 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149931414"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149931414"}]},"ts":"1689149931414"} 2023-07-12 08:18:51,415 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:51,416 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:51,416 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149931416"}]},"ts":"1689149931416"} 2023-07-12 08:18:51,417 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 08:18:51,420 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:51,421 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:51,421 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:51,421 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:51,421 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:51,421 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e32c68350c776569b0b9e6b278ff09a0, ASSIGN}] 2023-07-12 08:18:51,422 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e32c68350c776569b0b9e6b278ff09a0, ASSIGN 2023-07-12 08:18:51,422 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e32c68350c776569b0b9e6b278ff09a0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44351,1689149930457; forceNewPlan=false, retain=false 2023-07-12 08:18:51,423 INFO [jenkins-hbase4:36711] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 08:18:51,425 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cd50f667456c8de6113c503bce79a76a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:51,425 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149931425"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149931425"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149931425"}]},"ts":"1689149931425"} 2023-07-12 08:18:51,425 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e32c68350c776569b0b9e6b278ff09a0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:51,425 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149931425"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149931425"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149931425"}]},"ts":"1689149931425"} 2023-07-12 08:18:51,426 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure cd50f667456c8de6113c503bce79a76a, server=jenkins-hbase4.apache.org,39103,1689149930544}] 2023-07-12 08:18:51,427 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure e32c68350c776569b0b9e6b278ff09a0, server=jenkins-hbase4.apache.org,44351,1689149930457}] 2023-07-12 08:18:51,580 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:51,580 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:51,580 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:51,580 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 08:18:51,581 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38492, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:51,582 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58732, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 08:18:51,585 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 
2023-07-12 08:18:51,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e32c68350c776569b0b9e6b278ff09a0, NAME => 'hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:51,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 08:18:51,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. service=MultiRowMutationService 2023-07-12 08:18:51,586 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 08:18:51,586 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:51,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:51,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cd50f667456c8de6113c503bce79a76a, NAME => 'hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:51,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:51,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:51,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:51,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:51,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:51,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:51,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:51,588 INFO [StoreOpener-e32c68350c776569b0b9e6b278ff09a0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family m of region e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:51,591 INFO [StoreOpener-cd50f667456c8de6113c503bce79a76a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:51,591 DEBUG [StoreOpener-e32c68350c776569b0b9e6b278ff09a0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0/m 2023-07-12 08:18:51,592 DEBUG [StoreOpener-e32c68350c776569b0b9e6b278ff09a0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0/m 2023-07-12 08:18:51,592 DEBUG [StoreOpener-cd50f667456c8de6113c503bce79a76a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a/info 2023-07-12 08:18:51,592 DEBUG [StoreOpener-cd50f667456c8de6113c503bce79a76a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a/info 2023-07-12 08:18:51,592 INFO [StoreOpener-e32c68350c776569b0b9e6b278ff09a0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e32c68350c776569b0b9e6b278ff09a0 columnFamilyName m 2023-07-12 08:18:51,592 INFO [StoreOpener-cd50f667456c8de6113c503bce79a76a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cd50f667456c8de6113c503bce79a76a columnFamilyName info 2023-07-12 08:18:51,592 INFO [StoreOpener-e32c68350c776569b0b9e6b278ff09a0-1] regionserver.HStore(310): Store=e32c68350c776569b0b9e6b278ff09a0/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:51,593 INFO [StoreOpener-cd50f667456c8de6113c503bce79a76a-1] regionserver.HStore(310): 
Store=cd50f667456c8de6113c503bce79a76a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:51,593 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:51,593 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:51,593 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:51,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:51,596 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:51,598 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:51,598 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:51,598 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cd50f667456c8de6113c503bce79a76a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11921481920, jitterRate=0.11027452349662781}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:51,598 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cd50f667456c8de6113c503bce79a76a: 2023-07-12 08:18:51,599 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a., pid=8, masterSystemTime=1689149931579 2023-07-12 08:18:51,601 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:51,602 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e32c68350c776569b0b9e6b278ff09a0; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3a67168, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:51,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open 
journal for e32c68350c776569b0b9e6b278ff09a0: 2023-07-12 08:18:51,603 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:51,604 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:51,604 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0., pid=9, masterSystemTime=1689149931580 2023-07-12 08:18:51,604 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cd50f667456c8de6113c503bce79a76a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:51,605 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689149931604"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149931604"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149931604"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149931604"}]},"ts":"1689149931604"} 2023-07-12 08:18:51,606 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 2023-07-12 08:18:51,607 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 
2023-07-12 08:18:51,608 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e32c68350c776569b0b9e6b278ff09a0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:51,609 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689149931608"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149931608"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149931608"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149931608"}]},"ts":"1689149931608"} 2023-07-12 08:18:51,610 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-12 08:18:51,610 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure cd50f667456c8de6113c503bce79a76a, server=jenkins-hbase4.apache.org,39103,1689149930544 in 181 msec 2023-07-12 08:18:51,612 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 08:18:51,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-12 08:18:51,612 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure e32c68350c776569b0b9e6b278ff09a0, server=jenkins-hbase4.apache.org,44351,1689149930457 in 183 msec 2023-07-12 08:18:51,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=cd50f667456c8de6113c503bce79a76a, ASSIGN in 302 msec 2023-07-12 08:18:51,613 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:51,613 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149931613"}]},"ts":"1689149931613"} 2023-07-12 08:18:51,614 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-12 08:18:51,614 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e32c68350c776569b0b9e6b278ff09a0, ASSIGN in 191 msec 2023-07-12 08:18:51,614 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 08:18:51,615 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:51,615 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149931615"}]},"ts":"1689149931615"} 2023-07-12 08:18:51,616 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 08:18:51,617 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:51,618 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 351 msec 2023-07-12 08:18:51,619 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:51,620 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 240 msec 2023-07-12 08:18:51,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 08:18:51,671 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:51,671 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:51,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:51,676 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38494, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:51,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 08:18:51,683 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36711,1689149930395] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:51,686 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58748, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:51,689 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:51,692 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-07-12 08:18:51,692 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 08:18:51,692 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-12 08:18:51,697 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:51,697 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:51,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 08:18:51,703 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 08:18:51,704 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 08:18:51,708 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:51,711 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-12 08:18:51,724 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 08:18:51,726 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 08:18:51,726 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.131sec 2023-07-12 08:18:51,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 08:18:51,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 08:18:51,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 08:18:51,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36711,1689149930395-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 08:18:51,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36711,1689149930395-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-12 08:18:51,727 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 08:18:51,782 DEBUG [Listener at localhost/43935] zookeeper.ReadOnlyZKClient(139): Connect 0x7d87758a to 127.0.0.1:54034 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:51,789 DEBUG [Listener at localhost/43935] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@796cfeff, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:51,790 DEBUG [hconnection-0x4efaa00c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:51,792 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59484, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:51,794 INFO [Listener at localhost/43935] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36711,1689149930395 2023-07-12 08:18:51,794 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:51,796 DEBUG [Listener at localhost/43935] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 08:18:51,797 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45660, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 08:18:51,800 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 08:18:51,800 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:51,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-12 08:18:51,801 DEBUG [Listener at localhost/43935] zookeeper.ReadOnlyZKClient(139): Connect 0x1ecadc90 to 127.0.0.1:54034 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:51,807 DEBUG [Listener at localhost/43935] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@52be585, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:51,807 INFO [Listener at localhost/43935] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54034 2023-07-12 08:18:51,812 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:51,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): 
Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:51,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:51,821 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101589cfe88000a connected 2023-07-12 08:18:51,824 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 08:18:51,835 INFO [Listener at localhost/43935] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 08:18:51,835 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:51,836 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:51,836 INFO [Listener at localhost/43935] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 08:18:51,836 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 08:18:51,836 INFO [Listener at localhost/43935] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 08:18:51,836 INFO [Listener at localhost/43935] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 08:18:51,836 INFO [Listener at localhost/43935] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40145 2023-07-12 08:18:51,837 INFO [Listener at localhost/43935] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 08:18:51,838 DEBUG [Listener at localhost/43935] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 08:18:51,838 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:51,839 INFO [Listener at localhost/43935] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 08:18:51,840 INFO [Listener at localhost/43935] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40145 connecting to ZooKeeper ensemble=127.0.0.1:54034 2023-07-12 08:18:51,844 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:401450x0, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 08:18:51,845 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(162): regionserver:401450x0, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 08:18:51,846 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): regionserver:40145-0x101589cfe88000b connected 2023-07-12 08:18:51,846 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(162): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 08:18:51,847 DEBUG [Listener at localhost/43935] zookeeper.ZKUtil(164): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 08:18:51,849 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40145 2023-07-12 08:18:51,850 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40145 2023-07-12 08:18:51,851 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40145 2023-07-12 08:18:51,853 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40145 2023-07-12 08:18:51,853 DEBUG [Listener at localhost/43935] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40145 2023-07-12 08:18:51,855 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 08:18:51,855 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 08:18:51,855 INFO [Listener at localhost/43935] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 08:18:51,856 INFO [Listener at localhost/43935] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 08:18:51,856 INFO [Listener at localhost/43935] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 08:18:51,856 INFO [Listener at localhost/43935] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 08:18:51,856 INFO [Listener at localhost/43935] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 08:18:51,857 INFO [Listener at localhost/43935] http.HttpServer(1146): Jetty bound to port 45337 2023-07-12 08:18:51,857 INFO [Listener at localhost/43935] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 08:18:51,858 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:51,859 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5da1de58{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir/,AVAILABLE} 2023-07-12 08:18:51,859 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:51,859 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1b61fa7d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 08:18:51,865 INFO [Listener at localhost/43935] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 08:18:51,866 INFO [Listener at localhost/43935] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 08:18:51,866 INFO [Listener at localhost/43935] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 08:18:51,866 INFO [Listener at localhost/43935] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 08:18:51,867 INFO [Listener at localhost/43935] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 08:18:51,867 INFO [Listener at localhost/43935] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@58053c49{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:51,869 INFO [Listener at localhost/43935] server.AbstractConnector(333): Started ServerConnector@7ec3878e{HTTP/1.1, (http/1.1)}{0.0.0.0:45337} 2023-07-12 08:18:51,869 INFO [Listener at localhost/43935] server.Server(415): Started @42426ms 2023-07-12 08:18:51,872 INFO [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(951): ClusterId : 97047a5b-1252-47d6-beab-37d33744b315 2023-07-12 08:18:51,872 DEBUG [RS:3;jenkins-hbase4:40145] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 08:18:51,874 DEBUG [RS:3;jenkins-hbase4:40145] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 08:18:51,874 DEBUG [RS:3;jenkins-hbase4:40145] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 08:18:51,876 DEBUG [RS:3;jenkins-hbase4:40145] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 08:18:51,879 DEBUG [RS:3;jenkins-hbase4:40145] zookeeper.ReadOnlyZKClient(139): Connect 0x19b6c050 to 127.0.0.1:54034 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 08:18:51,884 DEBUG [RS:3;jenkins-hbase4:40145] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@17713d16, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 08:18:51,884 DEBUG [RS:3;jenkins-hbase4:40145] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@134eab20, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:51,893 DEBUG [RS:3;jenkins-hbase4:40145] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:40145 2023-07-12 08:18:51,893 INFO [RS:3;jenkins-hbase4:40145] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 08:18:51,893 INFO [RS:3;jenkins-hbase4:40145] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 08:18:51,893 DEBUG [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 08:18:51,893 INFO [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36711,1689149930395 with isa=jenkins-hbase4.apache.org/172.31.14.131:40145, startcode=1689149931835 2023-07-12 08:18:51,894 DEBUG [RS:3;jenkins-hbase4:40145] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 08:18:51,896 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51703, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 08:18:51,897 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36711] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:51,897 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 08:18:51,897 DEBUG [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a 2023-07-12 08:18:51,897 DEBUG [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41445 2023-07-12 08:18:51,897 DEBUG [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39521 2023-07-12 08:18:51,904 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:51,904 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:51,904 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:51,904 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:51,904 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:51,905 DEBUG [RS:3;jenkins-hbase4:40145] zookeeper.ZKUtil(162): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:51,905 WARN [RS:3;jenkins-hbase4:40145] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 08:18:51,905 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40145,1689149931835] 2023-07-12 08:18:51,905 INFO [RS:3;jenkins-hbase4:40145] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 08:18:51,905 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:51,905 DEBUG [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:51,905 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:51,905 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 08:18:51,906 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:51,906 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:51,906 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:51,907 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:51,907 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 08:18:51,907 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:51,907 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:51,907 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:51,908 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:51,908 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:51,909 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:51,910 DEBUG [RS:3;jenkins-hbase4:40145] zookeeper.ZKUtil(162): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:51,910 DEBUG [RS:3;jenkins-hbase4:40145] zookeeper.ZKUtil(162): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:51,910 DEBUG [RS:3;jenkins-hbase4:40145] zookeeper.ZKUtil(162): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:51,911 DEBUG [RS:3;jenkins-hbase4:40145] zookeeper.ZKUtil(162): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:51,911 DEBUG [RS:3;jenkins-hbase4:40145] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 08:18:51,912 INFO [RS:3;jenkins-hbase4:40145] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 08:18:51,913 INFO [RS:3;jenkins-hbase4:40145] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 08:18:51,914 INFO [RS:3;jenkins-hbase4:40145] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 08:18:51,914 INFO [RS:3;jenkins-hbase4:40145] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:51,914 INFO [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 08:18:51,915 INFO [RS:3;jenkins-hbase4:40145] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:51,915 DEBUG [RS:3;jenkins-hbase4:40145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:51,916 DEBUG [RS:3;jenkins-hbase4:40145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:51,916 DEBUG [RS:3;jenkins-hbase4:40145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:51,916 DEBUG [RS:3;jenkins-hbase4:40145] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:51,916 DEBUG [RS:3;jenkins-hbase4:40145] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:51,916 DEBUG [RS:3;jenkins-hbase4:40145] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 08:18:51,916 DEBUG [RS:3;jenkins-hbase4:40145] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:51,916 DEBUG [RS:3;jenkins-hbase4:40145] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:51,916 DEBUG [RS:3;jenkins-hbase4:40145] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:51,916 DEBUG [RS:3;jenkins-hbase4:40145] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 08:18:51,917 INFO [RS:3;jenkins-hbase4:40145] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:51,917 INFO [RS:3;jenkins-hbase4:40145] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:51,917 INFO [RS:3;jenkins-hbase4:40145] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 08:18:51,927 INFO [RS:3;jenkins-hbase4:40145] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 08:18:51,927 INFO [RS:3;jenkins-hbase4:40145] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40145,1689149931835-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 08:18:51,938 INFO [RS:3;jenkins-hbase4:40145] regionserver.Replication(203): jenkins-hbase4.apache.org,40145,1689149931835 started 2023-07-12 08:18:51,938 INFO [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40145,1689149931835, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40145, sessionid=0x101589cfe88000b 2023-07-12 08:18:51,938 DEBUG [RS:3;jenkins-hbase4:40145] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 08:18:51,938 DEBUG [RS:3;jenkins-hbase4:40145] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:51,938 DEBUG [RS:3;jenkins-hbase4:40145] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40145,1689149931835' 2023-07-12 08:18:51,938 DEBUG [RS:3;jenkins-hbase4:40145] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 08:18:51,938 DEBUG [RS:3;jenkins-hbase4:40145] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 08:18:51,938 DEBUG [RS:3;jenkins-hbase4:40145] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 08:18:51,938 DEBUG [RS:3;jenkins-hbase4:40145] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 08:18:51,938 DEBUG [RS:3;jenkins-hbase4:40145] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:51,938 DEBUG [RS:3;jenkins-hbase4:40145] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40145,1689149931835' 2023-07-12 08:18:51,938 DEBUG [RS:3;jenkins-hbase4:40145] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 08:18:51,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:51,939 DEBUG [RS:3;jenkins-hbase4:40145] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 08:18:51,939 DEBUG [RS:3;jenkins-hbase4:40145] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 08:18:51,939 INFO [RS:3;jenkins-hbase4:40145] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 08:18:51,939 INFO [RS:3;jenkins-hbase4:40145] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 08:18:51,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:51,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:51,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:51,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:51,945 DEBUG [hconnection-0x51656034-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:51,947 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59490, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:51,950 DEBUG [hconnection-0x51656034-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 08:18:51,951 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58752, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 08:18:51,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:51,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:51,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36711] to rsgroup master 2023-07-12 08:18:51,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:51,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:45660 deadline: 1689151131955, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 2023-07-12 08:18:51,956 WARN [Listener at localhost/43935] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:51,957 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:51,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:51,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:51,958 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33557, jenkins-hbase4.apache.org:39103, jenkins-hbase4.apache.org:40145, jenkins-hbase4.apache.org:44351], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:51,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:51,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:52,007 INFO [Listener at localhost/43935] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=567 (was 504) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:33557Replication Statistics #0 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@5ae4348e[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp832017075-2251 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689149930790 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: Listener at localhost/36551-SendThread(127.0.0.1:63658) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x4ee9282e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data3/current/BP-819731304-172.31.14.131-1689149929669 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43935.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1482504103-2223 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp329545773-2562-acceptor-0@766b081d-ServerConnector@7ec3878e{HTTP/1.1, (http/1.1)}{0.0.0.0:45337} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1495628744_17 at /127.0.0.1:44656 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-559-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@59f5ce37 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1961198180_17 at /127.0.0.1:58772 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x4ee9282e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1522815941.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 42867 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1495628744_17 at /127.0.0.1:44646 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp381834035-2297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51656034-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@6aa3eac9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@e46da2 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43935-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1585530438-2195 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x19b6c050-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1935520565@qtp-1765876619-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35197 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: jenkins-hbase4:44351Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44351 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp1585530438-2197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 41445 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase4:40145Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:41445 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:41445 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 0 on default port 36511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33557 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1585530438-2194 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_816319892_17 at /127.0.0.1:46042 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1482504103-2227 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1585530438-2196 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1585530438-2193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1234586889@qtp-113027777-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63658@0x09f33695-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1824667382) connection to 
localhost/127.0.0.1:41445 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp329545773-2565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:39103 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x1ecadc90-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-14b0ba11-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 41445 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1010604693_17 at /127.0.0.1:46050 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1482504103-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x763fdb7d-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_816319892_17 at /127.0.0.1:44632 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
qtp1585530438-2190 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:41039 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/36551-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/43935-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_816319892_17 at /127.0.0.1:45990 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-72dfe46e-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp832017075-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 36511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36711,1689149930395 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp755863144-2282-acceptor-0@ddc871-ServerConnector@48d26a39{HTTP/1.1, (http/1.1)}{0.0.0.0:35961} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp381834035-2296-acceptor-0@4a8e55b7-ServerConnector@1805bf9b{HTTP/1.1, (http/1.1)}{0.0.0.0:44979} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689149930789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x763fdb7d-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1585530438-2191-acceptor-0@62d25bfb-ServerConnector@67d7b2b8{HTTP/1.1, (http/1.1)}{0.0.0.0:39521} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1010604693_17 at /127.0.0.1:44636 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1585530438-2192 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:54034 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: Listener at localhost/43935.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-1010604693_17 at /127.0.0.1:44652 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp832017075-2258 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 41445 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1482504103-2228 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51656034-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: Session-HouseKeeper-1706e5e4-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@459b4b79 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:2;jenkins-hbase4:39103-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x1689d202-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: BP-819731304-172.31.14.131-1689149929669 heartbeating to localhost/127.0.0.1:41445 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:33557-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data1/current/BP-819731304-172.31.14.131-1689149929669 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-46a2541b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data4/current/BP-819731304-172.31.14.131-1689149929669 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp755863144-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36711 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ProcessThread(sid:0 cport:54034): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RS:1;jenkins-hbase4:33557 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x2e3df672-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Listener at localhost/43935-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp329545773-2563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x2e3df672-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43935-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x763fdb7d-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 43935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:3;jenkins-hbase4:40145-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 42867 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x763fdb7d-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:36711 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1031943248@qtp-673991938-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36645 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: pool-545-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-549-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43935-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1482504103-2221 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp381834035-2295 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a-prefix:jenkins-hbase4.apache.org,39103,1689149930544 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp381834035-2298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp832017075-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-540-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:41445 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:41039 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:41445 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x763fdb7d-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 42867 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-544-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x3364c31e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:44351 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp755863144-2288 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-819731304-172.31.14.131-1689149929669 heartbeating to localhost/127.0.0.1:41445 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:41039 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x19b6c050-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:41039 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x19b6c050 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1522815941.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 36511 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:44351-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:41039 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:41039 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp755863144-2281 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43935.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp755863144-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-563-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-819731304-172.31.14.131-1689149929669 heartbeating to localhost/127.0.0.1:41445 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1961198180_17 at /127.0.0.1:58754 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp755863144-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a-prefix:jenkins-hbase4.apache.org,33557,1689149930502.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1961198180_17 at /127.0.0.1:46018 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@3cc0e5fa java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp832017075-2252-acceptor-0@36bc4cb1-ServerConnector@33f3befb{HTTP/1.1, (http/1.1)}{0.0.0.0:34673} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x1689d202 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1522815941.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 41445 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@642627e0 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: hconnection-0x763fdb7d-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp329545773-2560 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1276560029@qtp-1765876619-0 java.lang.Object.wait(Native Method) 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x1ecadc90-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x4efaa00c-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1495628744_17 at /127.0.0.1:46056 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-550-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1482504103-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp832017075-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:41039 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 42867 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1495628744_17 at /127.0.0.1:58816 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_816319892_17 at /127.0.0.1:58796 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7754bef0 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 36511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:41039 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1482504103-2222-acceptor-0@63e827e1-ServerConnector@1c0eb1ab{HTTP/1.1, (http/1.1)}{0.0.0.0:34503} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:41445 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1010604693_17 at /127.0.0.1:58812 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x3364c31e-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 43935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 1434684253@qtp-965695712-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44331 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: 2028003843@qtp-673991938-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp329545773-2566 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43935-SendThread(127.0.0.1:54034) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2c4ac00-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x4ee9282e-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:39103Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43935 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp381834035-2299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(1652921730) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@56c1377a[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp329545773-2564 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp381834035-2292 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43935-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x7d87758a-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: IPC Server handler 0 on default port 42867 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 2 on default port 43935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-538-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp832017075-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43935-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/43935-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/43935.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging 
thread: AsyncFSWAL-0-hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a-prefix:jenkins-hbase4.apache.org,44351,1689149930457 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:41445 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp381834035-2293 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp832017075-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@236731b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43935-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp755863144-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp329545773-2567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data2/current/BP-819731304-172.31.14.131-1689149929669 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43935-SendThread(127.0.0.1:54034) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:41039 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a-prefix:jenkins-hbase4.apache.org,33557,1689149930502 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x1ecadc90 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1522815941.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 42867 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1495628744_17 at /127.0.0.1:46066 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: 1995487635@qtp-965695712-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: LeaseRenewer:jenkins@localhost:41445 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: globalEventExecutor-1-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) io.netty.util.concurrent.GlobalEventExecutor.takeTask(GlobalEventExecutor.java:95) io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:239) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x7d87758a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data6/current/BP-819731304-172.31.14.131-1689149929669 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x1689d202-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1961198180_17 at /127.0.0.1:44614 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43935-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x7d87758a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1522815941.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@52ddfca7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@575948b0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 43935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63658@0x09f33695-SendThread(127.0.0.1:63658) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) 
Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x2e3df672 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1522815941.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46573,1689149925661 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: IPC Server handler 2 on default port 41445 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:41445 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d472faa java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@789b5a20 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@727af871 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@ff264a2 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1482504103-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 41445 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-819731304-172.31.14.131-1689149929669:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1824667382) connection to localhost/127.0.0.1:41445 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54034@0x3364c31e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1522815941.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x763fdb7d-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging 
thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43935 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:40145 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x763fdb7d-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp755863144-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63658@0x09f33695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1522815941.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1495628744_17 at /127.0.0.1:58820 [Receiving block BP-819731304-172.31.14.131-1689149929669:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 274554793@qtp-113027777-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35817 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp381834035-2294 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp329545773-2561 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData-prefix:jenkins-hbase4.apache.org,36711,1689149930395 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data5/current/BP-819731304-172.31.14.131-1689149929669 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=857 (was 769) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 563), ProcessCount=174 (was 174), AvailableMemoryMB=3005 (was 3215) 2023-07-12 08:18:52,010 WARN [Listener at localhost/43935] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-12 08:18:52,027 INFO [Listener at localhost/43935] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=567, OpenFileDescriptor=857, MaxFileDescriptor=60000, SystemLoadAverage=528, ProcessCount=174, AvailableMemoryMB=3005 2023-07-12 08:18:52,027 WARN [Listener at localhost/43935] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-12 08:18:52,027 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-12 08:18:52,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:52,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:52,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:52,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 08:18:52,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:52,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:52,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:52,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:52,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:52,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:52,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:52,041 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:52,041 INFO [RS:3;jenkins-hbase4:40145] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40145%2C1689149931835, suffix=, logDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,40145,1689149931835, 
archiveDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/oldWALs, maxLogs=32 2023-07-12 08:18:52,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:52,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:52,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:52,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:52,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:52,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:52,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:52,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36711] to rsgroup master 2023-07-12 08:18:52,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:52,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:45660 deadline: 1689151132052, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 2023-07-12 08:18:52,054 WARN [Listener at localhost/43935] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 08:18:52,055 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:52,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:52,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:52,059 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33557, jenkins-hbase4.apache.org:39103, jenkins-hbase4.apache.org:40145, jenkins-hbase4.apache.org:44351], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:52,060 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK] 2023-07-12 08:18:52,060 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK] 2023-07-12 08:18:52,060 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK] 2023-07-12 08:18:52,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:52,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:52,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:52,062 INFO [RS:3;jenkins-hbase4:40145] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/WALs/jenkins-hbase4.apache.org,40145,1689149931835/jenkins-hbase4.apache.org%2C40145%2C1689149931835.1689149932041 2023-07-12 08:18:52,063 DEBUG [RS:3;jenkins-hbase4:40145] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46145,DS-93c0fd68-a446-4c68-b58d-263ac6cebd5b,DISK], DatanodeInfoWithStorage[127.0.0.1:44487,DS-4f87458b-b98c-42e9-8ce2-822870e8f0b7,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-656d82d9-0c1d-48b3-986d-ea15e5ce69cb,DISK]] 2023-07-12 08:18:52,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] 
procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 08:18:52,064 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:52,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-12 08:18:52,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 08:18:52,066 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:52,067 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:52,067 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:52,070 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 08:18:52,071 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/default/t1/f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,072 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/default/t1/f832ac29576fae8d237048e731640ec1 empty. 2023-07-12 08:18:52,072 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/default/t1/f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,072 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 08:18:52,095 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-12 08:18:52,096 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => f832ac29576fae8d237048e731640ec1, NAME => 't1,,1689149932062.f832ac29576fae8d237048e731640ec1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp 2023-07-12 08:18:52,104 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-12 08:18:52,119 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689149932062.f832ac29576fae8d237048e731640ec1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:52,119 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 
f832ac29576fae8d237048e731640ec1, disabling compactions & flushes 2023-07-12 08:18:52,120 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 2023-07-12 08:18:52,120 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 2023-07-12 08:18:52,120 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689149932062.f832ac29576fae8d237048e731640ec1. after waiting 0 ms 2023-07-12 08:18:52,120 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 2023-07-12 08:18:52,120 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 2023-07-12 08:18:52,120 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for f832ac29576fae8d237048e731640ec1: 2023-07-12 08:18:52,122 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 08:18:52,123 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689149932062.f832ac29576fae8d237048e731640ec1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689149932123"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149932123"}]},"ts":"1689149932123"} 2023-07-12 08:18:52,126 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 08:18:52,128 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 08:18:52,129 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149932128"}]},"ts":"1689149932128"} 2023-07-12 08:18:52,130 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-12 08:18:52,133 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 08:18:52,133 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 08:18:52,133 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 08:18:52,133 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 08:18:52,133 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 08:18:52,133 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 08:18:52,134 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=f832ac29576fae8d237048e731640ec1, ASSIGN}] 2023-07-12 08:18:52,134 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=f832ac29576fae8d237048e731640ec1, ASSIGN 2023-07-12 08:18:52,136 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=f832ac29576fae8d237048e731640ec1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33557,1689149930502; forceNewPlan=false, retain=false 2023-07-12 08:18:52,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 08:18:52,287 INFO [jenkins-hbase4:36711] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 08:18:52,288 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f832ac29576fae8d237048e731640ec1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:52,288 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689149932062.f832ac29576fae8d237048e731640ec1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689149932288"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149932288"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149932288"}]},"ts":"1689149932288"} 2023-07-12 08:18:52,290 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure f832ac29576fae8d237048e731640ec1, server=jenkins-hbase4.apache.org,33557,1689149930502}] 2023-07-12 08:18:52,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 08:18:52,446 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 
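The CreateTableProcedure above (pid=12, table=t1) is the master-side counterpart of an ordinary client createTable call. Below is a minimal, hypothetical sketch of that client-side call in the HBase 2.x Java API; the table name 't1' and column family 'cf1' mirror the descriptor logged above, while the connection setup and configuration are assumptions, not taken from this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateT1Sketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml points at the test cluster
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Table 't1' with one column family 'cf1' keeping a single version,
      // as in the descriptor printed by HMaster above.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
              .setMaxVersions(1)
              .build())
          .build());
    }
  }
}

The synchronous call returns once the master reports the procedure complete, which shows up in the log as the repeated "Checking to see if procedure is done pid=12" polling.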
2023-07-12 08:18:52,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f832ac29576fae8d237048e731640ec1, NAME => 't1,,1689149932062.f832ac29576fae8d237048e731640ec1.', STARTKEY => '', ENDKEY => ''} 2023-07-12 08:18:52,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689149932062.f832ac29576fae8d237048e731640ec1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 08:18:52,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,448 INFO [StoreOpener-f832ac29576fae8d237048e731640ec1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,449 DEBUG [StoreOpener-f832ac29576fae8d237048e731640ec1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/default/t1/f832ac29576fae8d237048e731640ec1/cf1 2023-07-12 08:18:52,449 DEBUG [StoreOpener-f832ac29576fae8d237048e731640ec1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/default/t1/f832ac29576fae8d237048e731640ec1/cf1 2023-07-12 08:18:52,449 INFO [StoreOpener-f832ac29576fae8d237048e731640ec1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f832ac29576fae8d237048e731640ec1 columnFamilyName cf1 2023-07-12 08:18:52,450 INFO [StoreOpener-f832ac29576fae8d237048e731640ec1-1] regionserver.HStore(310): Store=f832ac29576fae8d237048e731640ec1/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 08:18:52,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/default/t1/f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/default/t1/f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/default/t1/f832ac29576fae8d237048e731640ec1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 08:18:52,456 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f832ac29576fae8d237048e731640ec1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10443331360, jitterRate=-0.027388975024223328}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 08:18:52,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f832ac29576fae8d237048e731640ec1: 2023-07-12 08:18:52,457 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689149932062.f832ac29576fae8d237048e731640ec1., pid=14, masterSystemTime=1689149932442 2023-07-12 08:18:52,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 2023-07-12 08:18:52,458 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 2023-07-12 08:18:52,459 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f832ac29576fae8d237048e731640ec1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:52,459 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689149932062.f832ac29576fae8d237048e731640ec1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689149932459"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689149932459"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689149932459"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689149932459"}]},"ts":"1689149932459"} 2023-07-12 08:18:52,464 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-12 08:18:52,464 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure f832ac29576fae8d237048e731640ec1, server=jenkins-hbase4.apache.org,33557,1689149930502 in 172 msec 2023-07-12 08:18:52,466 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 08:18:52,466 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=f832ac29576fae8d237048e731640ec1, ASSIGN in 331 msec 2023-07-12 08:18:52,467 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 08:18:52,467 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149932467"}]},"ts":"1689149932467"} 2023-07-12 08:18:52,469 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-12 08:18:52,472 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 08:18:52,474 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 410 msec 2023-07-12 08:18:52,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 08:18:52,668 INFO [Listener at localhost/43935] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-12 08:18:52,669 DEBUG [Listener at localhost/43935] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-12 08:18:52,669 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:52,671 INFO [Listener at localhost/43935] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-12 08:18:52,671 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:52,672 INFO [Listener at localhost/43935] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 2023-07-12 08:18:52,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 08:18:52,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 08:18:52,677 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 08:18:52,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-12 08:18:52,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:45660 deadline: 1689149992673, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-12 08:18:52,679 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:52,680 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-12 08:18:52,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:52,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:52,781 INFO [Listener at localhost/43935] client.HBaseAdmin$15(890): Started disable of t1 2023-07-12 08:18:52,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-12 08:18:52,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-12 08:18:52,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 08:18:52,785 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149932785"}]},"ts":"1689149932785"} 2023-07-12 08:18:52,786 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-12 08:18:52,788 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-12 08:18:52,788 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=f832ac29576fae8d237048e731640ec1, UNASSIGN}] 2023-07-12 08:18:52,789 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=f832ac29576fae8d237048e731640ec1, UNASSIGN 2023-07-12 08:18:52,790 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=f832ac29576fae8d237048e731640ec1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:52,790 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"t1,,1689149932062.f832ac29576fae8d237048e731640ec1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689149932789"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689149932789"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689149932789"}]},"ts":"1689149932789"} 2023-07-12 08:18:52,791 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure f832ac29576fae8d237048e731640ec1, server=jenkins-hbase4.apache.org,33557,1689149930502}] 2023-07-12 08:18:52,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 08:18:52,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f832ac29576fae8d237048e731640ec1, disabling compactions & flushes 2023-07-12 08:18:52,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 2023-07-12 08:18:52,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 2023-07-12 08:18:52,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689149932062.f832ac29576fae8d237048e731640ec1. after waiting 0 ms 2023-07-12 08:18:52,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 2023-07-12 08:18:52,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/default/t1/f832ac29576fae8d237048e731640ec1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 08:18:52,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689149932062.f832ac29576fae8d237048e731640ec1. 
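The second create of 't1' above was rejected with TableExistsException and the procedure (pid=15) rolled back, which is what this test expects. For setup code that should be idempotent rather than fail, a guard like the following hypothetical helper is a common pattern; the helper name and structure are illustrative, and only Admin.tableExists, Admin.createTable and TableExistsException come from the HBase client API.

import java.io.IOException;
import org.apache.hadoop.hbase.TableExistsException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

final class IdempotentCreate {
  // Create the table only if absent, and tolerate the race where another client
  // creates it between the existence check and the createTable call.
  static void createIfAbsent(Admin admin, TableDescriptor desc) throws IOException {
    TableName name = desc.getTableName();
    if (admin.tableExists(name)) {
      return;
    }
    try {
      admin.createTable(desc);
    } catch (TableExistsException e) {
      // Lost the race; the table exists, which is all this helper guarantees.
    }
  }
}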
2023-07-12 08:18:52,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f832ac29576fae8d237048e731640ec1: 2023-07-12 08:18:52,950 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=f832ac29576fae8d237048e731640ec1, regionState=CLOSED 2023-07-12 08:18:52,950 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689149932062.f832ac29576fae8d237048e731640ec1.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689149932949"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689149932949"}]},"ts":"1689149932949"} 2023-07-12 08:18:52,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:52,953 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-12 08:18:52,953 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure f832ac29576fae8d237048e731640ec1, server=jenkins-hbase4.apache.org,33557,1689149930502 in 160 msec 2023-07-12 08:18:52,954 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-12 08:18:52,954 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=f832ac29576fae8d237048e731640ec1, UNASSIGN in 165 msec 2023-07-12 08:18:52,955 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689149932955"}]},"ts":"1689149932955"} 2023-07-12 08:18:52,956 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-12 08:18:52,957 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-12 08:18:52,959 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 177 msec 2023-07-12 08:18:53,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 08:18:53,087 INFO [Listener at localhost/43935] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-12 08:18:53,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-12 08:18:53,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-12 08:18:53,092 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 08:18:53,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-12 08:18:53,093 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-12 08:18:53,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:53,097 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/default/t1/f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:53,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 08:18:53,100 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/default/t1/f832ac29576fae8d237048e731640ec1/cf1, FileablePath, hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/default/t1/f832ac29576fae8d237048e731640ec1/recovered.edits] 2023-07-12 08:18:53,105 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/default/t1/f832ac29576fae8d237048e731640ec1/recovered.edits/4.seqid to hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/archive/data/default/t1/f832ac29576fae8d237048e731640ec1/recovered.edits/4.seqid 2023-07-12 08:18:53,106 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/.tmp/data/default/t1/f832ac29576fae8d237048e731640ec1 2023-07-12 08:18:53,106 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 08:18:53,108 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-12 08:18:53,110 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-12 08:18:53,111 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-12 08:18:53,112 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-12 08:18:53,112 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 2023-07-12 08:18:53,112 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689149932062.f832ac29576fae8d237048e731640ec1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689149933112"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:53,114 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 08:18:53,114 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f832ac29576fae8d237048e731640ec1, NAME => 't1,,1689149932062.f832ac29576fae8d237048e731640ec1.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 08:18:53,114 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 
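The DISABLE (pid=16) and DELETE (pid=19) procedures above are driven by the client's disableTable/deleteTable calls; a table must be disabled before it can be deleted, and the region data is moved under the archive path shown by HFileArchiver. A hypothetical teardown helper along those lines (the method name and timeout are illustrative, the Admin calls are standard HBase 2.x API):

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class DropTableSketch {
  // Disable-then-delete, mirroring the DisableTableProcedure/DeleteTableProcedure pair above.
  static void dropIfPresent(Admin admin, TableName name) throws Exception {
    if (!admin.tableExists(name)) {
      return;
    }
    if (admin.isTableEnabled(name)) {
      admin.disableTable(name); // blocks until the DISABLE procedure finishes
    }
    // The async variant returns a Future backed by the same "is procedure done" polling seen in the log.
    admin.deleteTableAsync(name).get(60, TimeUnit.SECONDS);
  }
}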
2023-07-12 08:18:53,114 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689149933114"}]},"ts":"9223372036854775807"} 2023-07-12 08:18:53,118 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-12 08:18:53,121 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 08:18:53,122 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 33 msec 2023-07-12 08:18:53,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 08:18:53,200 INFO [Listener at localhost/43935] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-12 08:18:53,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:53,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
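The teardown sequence that follows (move tables and servers back to 'default', remove and re-add the 'master' group, then attempt to move the master's own address into it) fails with ConstraintException because jenkins-hbase4.apache.org:36711 is the master, not a registered region server, and TestRSGroupsBase merely logs it as "Got this on setup, FYI". A hedged sketch of those client-side calls follows; the moveServers and addRSGroup method names match the rsgroup admin calls visible in the stack traces and log lines above, but the RSGroupAdminClient constructor and the surrounding setup are assumptions.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class RestoreMasterGroupSketch {
  // Re-create the 'master' rsgroup and try to move the master's address into it,
  // tolerating the ConstraintException the master raises for an address that is
  // not a live region server ("offline or it does not exist").
  static void restoreMasterGroup(Connection conn, Address masterAddr) throws IOException {
    RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn); // constructor assumed; see note above
    groupAdmin.addRSGroup("master");
    try {
      groupAdmin.moveServers(Collections.singleton(masterAddr), "master");
    } catch (ConstraintException e) {
      // Expected here, exactly like the WARN "Got this on setup, FYI" entries in this log.
    }
  }
}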
2023-07-12 08:18:53,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:53,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:53,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:53,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:53,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:53,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:53,216 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:53,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:53,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:53,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:53,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36711] to rsgroup master 2023-07-12 08:18:53,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:53,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:45660 deadline: 1689151133225, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 2023-07-12 08:18:53,226 WARN [Listener at localhost/43935] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:53,230 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:53,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,230 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33557, jenkins-hbase4.apache.org:39103, jenkins-hbase4.apache.org:40145, jenkins-hbase4.apache.org:44351], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:53,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:53,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:53,249 INFO [Listener at localhost/43935] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=581 (was 567) - Thread LEAK? -, OpenFileDescriptor=860 (was 857) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 528), ProcessCount=174 (was 174), AvailableMemoryMB=2991 (was 3005) 2023-07-12 08:18:53,250 WARN [Listener at localhost/43935] hbase.ResourceChecker(130): Thread=581 is superior to 500 2023-07-12 08:18:53,268 INFO [Listener at localhost/43935] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=581, OpenFileDescriptor=860, MaxFileDescriptor=60000, SystemLoadAverage=528, ProcessCount=174, AvailableMemoryMB=2990 2023-07-12 08:18:53,268 WARN [Listener at localhost/43935] hbase.ResourceChecker(130): Thread=581 is superior to 500 2023-07-12 08:18:53,268 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-12 08:18:53,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:53,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 08:18:53,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:53,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:53,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:53,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:53,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:53,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:53,280 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:53,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:53,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,283 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:53,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:53,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36711] to rsgroup master 2023-07-12 08:18:53,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:53,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45660 deadline: 1689151133289, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 2023-07-12 08:18:53,290 WARN [Listener at localhost/43935] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 08:18:53,292 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:53,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,293 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33557, jenkins-hbase4.apache.org:39103, jenkins-hbase4.apache.org:40145, jenkins-hbase4.apache.org:44351], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:53,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:53,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:53,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 08:18:53,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:53,295 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-12 08:18:53,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 08:18:53,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 08:18:53,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:53,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:53,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:53,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:53,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:53,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:53,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:53,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:53,316 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:53,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:53,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:53,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:53,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36711] to rsgroup master 2023-07-12 08:18:53,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:53,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45660 deadline: 1689151133330, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 2023-07-12 08:18:53,330 WARN [Listener at localhost/43935] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:53,332 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:53,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,333 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33557, jenkins-hbase4.apache.org:39103, jenkins-hbase4.apache.org:40145, jenkins-hbase4.apache.org:44351], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:53,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:53,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:53,353 INFO [Listener at localhost/43935] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=583 (was 581) - Thread LEAK? 
-, OpenFileDescriptor=860 (was 860), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 528), ProcessCount=174 (was 174), AvailableMemoryMB=2989 (was 2990) 2023-07-12 08:18:53,353 WARN [Listener at localhost/43935] hbase.ResourceChecker(130): Thread=583 is superior to 500 2023-07-12 08:18:53,375 INFO [Listener at localhost/43935] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=583, OpenFileDescriptor=860, MaxFileDescriptor=60000, SystemLoadAverage=528, ProcessCount=174, AvailableMemoryMB=2989 2023-07-12 08:18:53,376 WARN [Listener at localhost/43935] hbase.ResourceChecker(130): Thread=583 is superior to 500 2023-07-12 08:18:53,376 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-12 08:18:53,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:53,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 08:18:53,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:53,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:53,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:53,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:53,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:53,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:53,389 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:53,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:53,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,392 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:53,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:53,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36711] to rsgroup master 2023-07-12 08:18:53,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:53,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45660 deadline: 1689151133399, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 2023-07-12 08:18:53,400 WARN [Listener at localhost/43935] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 08:18:53,402 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:53,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,403 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33557, jenkins-hbase4.apache.org:39103, jenkins-hbase4.apache.org:40145, jenkins-hbase4.apache.org:44351], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:53,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:53,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:53,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:53,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 08:18:53,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:53,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:53,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:53,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:53,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:53,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:53,418 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:53,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:53,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:53,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:53,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36711] to rsgroup master 2023-07-12 08:18:53,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:53,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45660 deadline: 1689151133427, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 2023-07-12 08:18:53,427 WARN [Listener at localhost/43935] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:53,429 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:53,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,430 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33557, jenkins-hbase4.apache.org:39103, jenkins-hbase4.apache.org:40145, jenkins-hbase4.apache.org:44351], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:53,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:53,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:53,451 INFO [Listener at localhost/43935] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=583 (was 583), OpenFileDescriptor=859 (was 860), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 528), ProcessCount=174 (was 174), AvailableMemoryMB=2990 (was 2989) - AvailableMemoryMB LEAK? 
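
The ConstraintException traced above is benign setup/teardown noise: TestRSGroupsBase.tearDownAfterMethod re-creates a "master" rsgroup and tries to move the active master's address (jenkins-hbase4.apache.org:36711) into it, and RSGroupAdminServer.moveServers rejects that address because it is not an online region server; the test logs it as "Got this on setup, FYI" and continues. Below is a minimal sketch of that pattern, assuming the RSGroupAdminClient API visible in the stack trace; the class and method names not present in the log are illustrative only.

    // Hedged sketch of the "quarantine the master" step seen in the teardown above.
    // Assumes the hbase-rsgroup module's RSGroupAdminClient; not the verbatim test code.
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MasterGroupQuarantineSketch {
      public static void quarantineMaster(Connection conn, Address masterAddr) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.addRSGroup("master");                          // re-create the helper group
        try {
          // The master address is not an online region server, so this is expected to fail
          // with ConstraintException ("... is either offline or it does not exist").
          groups.moveServers(Collections.singleton(masterAddr), "master");
        } catch (ConstraintException expected) {
          // Logged by the test as "Got this on setup, FYI" and otherwise ignored.
        }
      }
    }
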
- 2023-07-12 08:18:53,451 WARN [Listener at localhost/43935] hbase.ResourceChecker(130): Thread=583 is superior to 500 2023-07-12 08:18:53,468 INFO [Listener at localhost/43935] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=583, OpenFileDescriptor=859, MaxFileDescriptor=60000, SystemLoadAverage=528, ProcessCount=174, AvailableMemoryMB=2989 2023-07-12 08:18:53,468 WARN [Listener at localhost/43935] hbase.ResourceChecker(130): Thread=583 is superior to 500 2023-07-12 08:18:53,468 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-12 08:18:53,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:53,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 08:18:53,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:53,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:53,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:53,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:53,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:53,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:53,479 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:53,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:53,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,483 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:53,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:53,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36711] to rsgroup master 2023-07-12 08:18:53,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:53,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45660 deadline: 1689151133489, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 2023-07-12 08:18:53,489 WARN [Listener at localhost/43935] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 08:18:53,491 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:53,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,492 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33557, jenkins-hbase4.apache.org:39103, jenkins-hbase4.apache.org:40145, jenkins-hbase4.apache.org:44351], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:53,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:53,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:53,493 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-12 08:18:53,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-12 08:18:53,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 08:18:53,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 08:18:53,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:53,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 08:18:53,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-12 08:18:53,505 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 08:18:53,508 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:53,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-12 08:18:53,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 08:18:53,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-12 08:18:53,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:53,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:45660 deadline: 1689151133606, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-12 08:18:53,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 08:18:53,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-12 08:18:53,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 08:18:53,627 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 08:18:53,628 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-12 08:18:53,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 08:18:53,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-12 08:18:53,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 08:18:53,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 08:18:53,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 08:18:53,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:53,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-12 08:18:53,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 08:18:53,744 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 08:18:53,747 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 08:18:53,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 08:18:53,748 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 08:18:53,749 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 08:18:53,750 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 08:18:53,750 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 08:18:53,752 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 08:18:53,753 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-12 08:18:53,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 08:18:53,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-12 08:18:53,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 08:18:53,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 08:18:53,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:53,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:53,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:45660 deadline: 1689149993859, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-12 08:18:53,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:53,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
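
The two ConstraintExceptions in testNamespaceConstraint come from the namespace-to-rsgroup binding enforced by RSGroupAdminEndpoint: a namespace created with hbase.rsgroup.name => 'Group_foo' pins that group, so removeRSGroup is refused while the namespace exists ("RSGroup Group_foo is referenced by namespace: Group_foo"), and preCreateNamespace refuses a namespace whose hbase.rsgroup.name points at a group that does not exist ("Region server group foo does not exist."). A minimal client-side sketch of those calls follows, assuming the standard Admin/NamespaceDescriptor API plus the RSGroupAdminClient from this module; the "ns_bad" namespace name is illustrative and does not appear in the log.

    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class NamespaceConstraintSketch {
      public static void run(Connection conn) throws Exception {
        Admin admin = conn.getAdmin();
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);

        groups.addRSGroup("Group_foo");
        // Namespace bound to an existing group: accepted (CreateNamespaceProcedure pid=20 above).
        admin.createNamespace(NamespaceDescriptor.create("Group_foo")
            .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

        try {
          groups.removeRSGroup("Group_foo");     // refused while the namespace references it
        } catch (ConstraintException expected) {
          // "RSGroup Group_foo is referenced by namespace: Group_foo"
        }

        admin.deleteNamespace("Group_foo");      // DeleteNamespaceProcedure pid=22 above
        groups.removeRSGroup("Group_foo");       // now allowed

        try {
          // Namespace pointing at a group that does not exist: rejected by preCreateNamespace.
          admin.createNamespace(NamespaceDescriptor.create("ns_bad")
              .addConfiguration("hbase.rsgroup.name", "foo").build());
        } catch (ConstraintException expected) {
          // "Region server group foo does not exist."
        }
      }
    }
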
2023-07-12 08:18:53,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:53,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:53,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:53,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-12 08:18:53,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 08:18:53,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:53,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 08:18:53,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
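
The moveTables/moveServers/removeRSGroup sequence above is the per-method cleanup that returns every server and table to the default group before the next test; empty sets are no-ops ("moveTables() passed an empty set. Ignoring."). A rough sketch of that cleanup loop follows, assuming the RSGroupAdminClient and RSGroupInfo APIs visible in the stack traces; it is not the verbatim TestRSGroupsBase code.

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class GroupCleanupSketch {
      public static void restoreDefaultLayout(Connection conn) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        for (RSGroupInfo info : groups.listRSGroups()) {
          if (RSGroupInfo.DEFAULT_GROUP.equals(info.getName())) {
            continue;                                           // leave the default group alone
          }
          // Push any tables and servers back to default, then drop the test group.
          groups.moveTables(info.getTables(), RSGroupInfo.DEFAULT_GROUP);
          groups.moveServers(info.getServers(), RSGroupInfo.DEFAULT_GROUP);
          groups.removeRSGroup(info.getName());
        }
      }
    }

The Waiter entries that follow ("Waiting up to [60,000] milli-secs") poll ListRSGroupInfos until only the default group (holding all four region servers and the hbase:meta, hbase:namespace and hbase:rsgroup tables) and an empty master group remain.
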
2023-07-12 08:18:53,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 08:18:53,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 08:18:53,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 08:18:53,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 08:18:53,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 08:18:53,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 08:18:53,882 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 08:18:53,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 08:18:53,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 08:18:53,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 08:18:53,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 08:18:53,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 08:18:53,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36711] to rsgroup master 2023-07-12 08:18:53,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 08:18:53,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45660 deadline: 1689151133897, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 2023-07-12 08:18:53,897 WARN [Listener at localhost/43935] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36711 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 08:18:53,900 INFO [Listener at localhost/43935] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 08:18:53,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 08:18:53,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 08:18:53,901 INFO [Listener at localhost/43935] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33557, jenkins-hbase4.apache.org:39103, jenkins-hbase4.apache.org:40145, jenkins-hbase4.apache.org:44351], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 08:18:53,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 08:18:53,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 08:18:53,934 INFO [Listener at localhost/43935] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=579 (was 583), OpenFileDescriptor=850 (was 859), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 528), ProcessCount=174 (was 174), AvailableMemoryMB=3001 (was 2989) - AvailableMemoryMB LEAK? 
- 2023-07-12 08:18:53,934 WARN [Listener at localhost/43935] hbase.ResourceChecker(130): Thread=579 is superior to 500 2023-07-12 08:18:53,934 INFO [Listener at localhost/43935] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 08:18:53,934 INFO [Listener at localhost/43935] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 08:18:53,935 DEBUG [Listener at localhost/43935] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7d87758a to 127.0.0.1:54034 2023-07-12 08:18:53,935 DEBUG [Listener at localhost/43935] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:53,937 DEBUG [Listener at localhost/43935] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 08:18:53,937 DEBUG [Listener at localhost/43935] util.JVMClusterUtil(257): Found active master hash=518714575, stopped=false 2023-07-12 08:18:53,938 DEBUG [Listener at localhost/43935] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 08:18:53,938 DEBUG [Listener at localhost/43935] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 08:18:53,938 INFO [Listener at localhost/43935] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36711,1689149930395 2023-07-12 08:18:53,941 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:53,942 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:53,942 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:53,942 INFO [Listener at localhost/43935] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 08:18:53,942 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:53,942 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:53,942 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 08:18:53,942 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:53,942 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:53,942 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:53,943 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:53,943 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 08:18:53,947 DEBUG [Listener at localhost/43935] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4ee9282e to 127.0.0.1:54034 2023-07-12 08:18:53,948 DEBUG [Listener at localhost/43935] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:53,948 INFO [Listener at localhost/43935] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44351,1689149930457' ***** 2023-07-12 08:18:53,948 INFO [Listener at localhost/43935] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:53,948 INFO [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:53,948 INFO [Listener at localhost/43935] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33557,1689149930502' ***** 2023-07-12 08:18:53,948 INFO [Listener at localhost/43935] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:53,948 INFO [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:53,950 INFO [Listener at localhost/43935] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39103,1689149930544' ***** 2023-07-12 08:18:53,950 INFO [Listener at localhost/43935] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:53,950 INFO [Listener at localhost/43935] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40145,1689149931835' ***** 2023-07-12 08:18:53,950 INFO [Listener at localhost/43935] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 08:18:53,950 INFO [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:53,951 INFO [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:53,957 INFO [RS:0;jenkins-hbase4:44351] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1f408f01{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:53,959 INFO [RS:0;jenkins-hbase4:44351] server.AbstractConnector(383): Stopped ServerConnector@1c0eb1ab{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:53,959 INFO [RS:1;jenkins-hbase4:33557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3117de2f{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:53,959 INFO [RS:0;jenkins-hbase4:44351] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:53,960 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:53,960 INFO [regionserver/jenkins-hbase4:0.leaseChecker] 
regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:53,961 INFO [RS:0;jenkins-hbase4:44351] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ea72a52{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:53,961 INFO [RS:1;jenkins-hbase4:33557] server.AbstractConnector(383): Stopped ServerConnector@33f3befb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:53,961 INFO [RS:1;jenkins-hbase4:33557] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:53,961 INFO [RS:2;jenkins-hbase4:39103] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3720103e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:53,961 INFO [RS:3;jenkins-hbase4:40145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@58053c49{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 08:18:53,962 INFO [RS:2;jenkins-hbase4:39103] server.AbstractConnector(383): Stopped ServerConnector@48d26a39{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:53,962 INFO [RS:3;jenkins-hbase4:40145] server.AbstractConnector(383): Stopped ServerConnector@7ec3878e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:53,962 INFO [RS:2;jenkins-hbase4:39103] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:53,962 INFO [RS:3;jenkins-hbase4:40145] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:53,964 INFO [RS:0;jenkins-hbase4:44351] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39bfefc6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:53,964 INFO [RS:1;jenkins-hbase4:33557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41412b54{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:53,964 INFO [RS:2;jenkins-hbase4:39103] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6d4375e7{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:53,964 INFO [RS:3;jenkins-hbase4:40145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1b61fa7d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:53,965 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:53,965 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:53,965 INFO [RS:2;jenkins-hbase4:39103] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@15be34c0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:53,966 INFO [RS:0;jenkins-hbase4:44351] regionserver.HeapMemoryManager(220): Stopping 
2023-07-12 08:18:53,966 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:53,966 INFO [RS:0;jenkins-hbase4:44351] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 08:18:53,967 INFO [RS:0;jenkins-hbase4:44351] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 08:18:53,965 INFO [RS:1;jenkins-hbase4:33557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4ceb7e75{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:53,966 INFO [RS:3;jenkins-hbase4:40145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5da1de58{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:53,967 INFO [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(3305): Received CLOSE for e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:53,967 INFO [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:53,968 DEBUG [RS:0;jenkins-hbase4:44351] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2e3df672 to 127.0.0.1:54034 2023-07-12 08:18:53,968 DEBUG [RS:0;jenkins-hbase4:44351] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:53,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e32c68350c776569b0b9e6b278ff09a0, disabling compactions & flushes 2023-07-12 08:18:53,968 INFO [RS:2;jenkins-hbase4:39103] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 08:18:53,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 2023-07-12 08:18:53,968 INFO [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 08:18:53,968 DEBUG [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1478): Online Regions={e32c68350c776569b0b9e6b278ff09a0=hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0.} 2023-07-12 08:18:53,968 INFO [RS:2;jenkins-hbase4:39103] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 08:18:53,968 DEBUG [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1504): Waiting on e32c68350c776569b0b9e6b278ff09a0 2023-07-12 08:18:53,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 2023-07-12 08:18:53,968 INFO [RS:2;jenkins-hbase4:39103] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 08:18:53,968 INFO [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(3305): Received CLOSE for cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:53,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 
after waiting 0 ms 2023-07-12 08:18:53,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 2023-07-12 08:18:53,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e32c68350c776569b0b9e6b278ff09a0 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-12 08:18:53,972 INFO [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:53,972 DEBUG [RS:2;jenkins-hbase4:39103] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1689d202 to 127.0.0.1:54034 2023-07-12 08:18:53,972 DEBUG [RS:2;jenkins-hbase4:39103] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:53,972 INFO [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 08:18:53,972 DEBUG [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1478): Online Regions={cd50f667456c8de6113c503bce79a76a=hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a.} 2023-07-12 08:18:53,973 DEBUG [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1504): Waiting on cd50f667456c8de6113c503bce79a76a 2023-07-12 08:18:53,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cd50f667456c8de6113c503bce79a76a, disabling compactions & flushes 2023-07-12 08:18:53,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:53,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:53,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. after waiting 0 ms 2023-07-12 08:18:53,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:53,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing cd50f667456c8de6113c503bce79a76a 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-12 08:18:53,974 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:53,974 INFO [RS:3;jenkins-hbase4:40145] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 08:18:53,975 INFO [RS:3;jenkins-hbase4:40145] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 08:18:53,975 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 08:18:53,975 INFO [RS:1;jenkins-hbase4:33557] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 08:18:53,975 INFO [RS:3;jenkins-hbase4:40145] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 08:18:53,975 INFO [RS:1;jenkins-hbase4:33557] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-12 08:18:53,975 INFO [RS:1;jenkins-hbase4:33557] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 08:18:53,975 INFO [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:53,975 DEBUG [RS:1;jenkins-hbase4:33557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3364c31e to 127.0.0.1:54034 2023-07-12 08:18:53,975 DEBUG [RS:1;jenkins-hbase4:33557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:53,975 INFO [RS:1;jenkins-hbase4:33557] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 08:18:53,975 INFO [RS:1;jenkins-hbase4:33557] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:53,975 INFO [RS:1;jenkins-hbase4:33557] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 08:18:53,975 INFO [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 08:18:53,975 INFO [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:53,975 DEBUG [RS:3;jenkins-hbase4:40145] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x19b6c050 to 127.0.0.1:54034 2023-07-12 08:18:53,976 DEBUG [RS:3;jenkins-hbase4:40145] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:53,976 INFO [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40145,1689149931835; all regions closed. 2023-07-12 08:18:53,976 INFO [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 08:18:53,976 DEBUG [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-12 08:18:53,976 DEBUG [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 08:18:53,977 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 08:18:53,977 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 08:18:53,977 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 08:18:53,977 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 08:18:53,977 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 08:18:53,977 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-12 08:18:54,015 DEBUG [RS:3;jenkins-hbase4:40145] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/oldWALs 2023-07-12 08:18:54,015 INFO [RS:3;jenkins-hbase4:40145] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40145%2C1689149931835:(num 1689149932041) 2023-07-12 08:18:54,015 DEBUG [RS:3;jenkins-hbase4:40145] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:54,015 INFO [RS:3;jenkins-hbase4:40145] 
regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:54,019 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:54,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0/.tmp/m/4c110dfe68104d2ead78e363c504cf96 2023-07-12 08:18:54,035 INFO [RS:3;jenkins-hbase4:40145] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:54,041 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4c110dfe68104d2ead78e363c504cf96 2023-07-12 08:18:54,042 INFO [RS:3;jenkins-hbase4:40145] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 08:18:54,042 INFO [RS:3;jenkins-hbase4:40145] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:54,042 INFO [RS:3;jenkins-hbase4:40145] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 08:18:54,046 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 08:18:54,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0/.tmp/m/4c110dfe68104d2ead78e363c504cf96 as hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0/m/4c110dfe68104d2ead78e363c504cf96 2023-07-12 08:18:54,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4c110dfe68104d2ead78e363c504cf96 2023-07-12 08:18:54,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0/m/4c110dfe68104d2ead78e363c504cf96, entries=12, sequenceid=29, filesize=5.4 K 2023-07-12 08:18:54,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for e32c68350c776569b0b9e6b278ff09a0 in 94ms, sequenceid=29, compaction requested=false 2023-07-12 08:18:54,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 08:18:54,065 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/.tmp/info/c5ae3962c5dc4c57b18228e8d83ce963 2023-07-12 08:18:54,073 INFO [RS:3;jenkins-hbase4:40145] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40145 2023-07-12 08:18:54,077 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/rsgroup/e32c68350c776569b0b9e6b278ff09a0/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-12 08:18:54,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 08:18:54,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a/.tmp/info/83df9fa49a214c9b8cbfc40d9a61b1c6 2023-07-12 08:18:54,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 2023-07-12 08:18:54,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e32c68350c776569b0b9e6b278ff09a0: 2023-07-12 08:18:54,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689149931378.e32c68350c776569b0b9e6b278ff09a0. 2023-07-12 08:18:54,082 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:54,082 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:54,082 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:54,082 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:54,082 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:54,082 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:54,082 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:54,082 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, 
state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40145,1689149931835 2023-07-12 08:18:54,082 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:54,084 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40145,1689149931835] 2023-07-12 08:18:54,084 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40145,1689149931835; numProcessing=1 2023-07-12 08:18:54,085 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40145,1689149931835 already deleted, retry=false 2023-07-12 08:18:54,085 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40145,1689149931835 expired; onlineServers=3 2023-07-12 08:18:54,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 83df9fa49a214c9b8cbfc40d9a61b1c6 2023-07-12 08:18:54,089 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c5ae3962c5dc4c57b18228e8d83ce963 2023-07-12 08:18:54,091 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a/.tmp/info/83df9fa49a214c9b8cbfc40d9a61b1c6 as hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a/info/83df9fa49a214c9b8cbfc40d9a61b1c6 2023-07-12 08:18:54,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 83df9fa49a214c9b8cbfc40d9a61b1c6 2023-07-12 08:18:54,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a/info/83df9fa49a214c9b8cbfc40d9a61b1c6, entries=3, sequenceid=9, filesize=5.0 K 2023-07-12 08:18:54,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for cd50f667456c8de6113c503bce79a76a in 130ms, sequenceid=9, compaction requested=false 2023-07-12 08:18:54,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 08:18:54,119 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/.tmp/rep_barrier/df8c23565f2c4f07bc4d706dd98f2530 2023-07-12 08:18:54,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/namespace/cd50f667456c8de6113c503bce79a76a/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 08:18:54,125 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:54,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cd50f667456c8de6113c503bce79a76a: 2023-07-12 08:18:54,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689149931266.cd50f667456c8de6113c503bce79a76a. 2023-07-12 08:18:54,126 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for df8c23565f2c4f07bc4d706dd98f2530 2023-07-12 08:18:54,141 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/.tmp/table/d235f36ea4684af1b2cc4d59887a100e 2023-07-12 08:18:54,146 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d235f36ea4684af1b2cc4d59887a100e 2023-07-12 08:18:54,146 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/.tmp/info/c5ae3962c5dc4c57b18228e8d83ce963 as hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/info/c5ae3962c5dc4c57b18228e8d83ce963 2023-07-12 08:18:54,151 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c5ae3962c5dc4c57b18228e8d83ce963 2023-07-12 08:18:54,151 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/info/c5ae3962c5dc4c57b18228e8d83ce963, entries=22, sequenceid=26, filesize=7.3 K 2023-07-12 08:18:54,152 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/.tmp/rep_barrier/df8c23565f2c4f07bc4d706dd98f2530 as hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/rep_barrier/df8c23565f2c4f07bc4d706dd98f2530 2023-07-12 08:18:54,156 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for df8c23565f2c4f07bc4d706dd98f2530 2023-07-12 08:18:54,156 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/rep_barrier/df8c23565f2c4f07bc4d706dd98f2530, entries=1, sequenceid=26, filesize=4.9 K 2023-07-12 08:18:54,157 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/.tmp/table/d235f36ea4684af1b2cc4d59887a100e as hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/table/d235f36ea4684af1b2cc4d59887a100e 2023-07-12 08:18:54,162 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d235f36ea4684af1b2cc4d59887a100e 2023-07-12 08:18:54,162 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/table/d235f36ea4684af1b2cc4d59887a100e, entries=6, sequenceid=26, filesize=5.1 K 2023-07-12 08:18:54,162 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 185ms, sequenceid=26, compaction requested=false 2023-07-12 08:18:54,163 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 08:18:54,168 INFO [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44351,1689149930457; all regions closed. 2023-07-12 08:18:54,177 DEBUG [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 08:18:54,177 INFO [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39103,1689149930544; all regions closed. 2023-07-12 08:18:54,180 DEBUG [RS:0;jenkins-hbase4:44351] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/oldWALs 2023-07-12 08:18:54,180 INFO [RS:0;jenkins-hbase4:44351] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44351%2C1689149930457:(num 1689149931000) 2023-07-12 08:18:54,180 DEBUG [RS:0;jenkins-hbase4:44351] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:54,180 INFO [RS:0;jenkins-hbase4:44351] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:54,182 INFO [RS:0;jenkins-hbase4:44351] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:54,182 INFO [RS:0;jenkins-hbase4:44351] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 08:18:54,182 INFO [RS:0;jenkins-hbase4:44351] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:54,182 INFO [RS:0;jenkins-hbase4:44351] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 08:18:54,182 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 08:18:54,183 INFO [RS:0;jenkins-hbase4:44351] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44351 2023-07-12 08:18:54,183 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-12 08:18:54,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 08:18:54,185 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 08:18:54,185 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 08:18:54,185 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 08:18:54,185 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:54,185 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:54,185 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:54,185 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44351,1689149930457 2023-07-12 08:18:54,186 DEBUG [RS:2;jenkins-hbase4:39103] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/oldWALs 2023-07-12 08:18:54,186 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44351,1689149930457] 2023-07-12 08:18:54,186 INFO [RS:2;jenkins-hbase4:39103] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39103%2C1689149930544:(num 1689149931007) 2023-07-12 08:18:54,186 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44351,1689149930457; numProcessing=2 2023-07-12 08:18:54,186 DEBUG [RS:2;jenkins-hbase4:39103] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:54,186 INFO [RS:2;jenkins-hbase4:39103] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:54,186 INFO [RS:2;jenkins-hbase4:39103] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:54,186 INFO [RS:2;jenkins-hbase4:39103] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-12 08:18:54,186 INFO [RS:2;jenkins-hbase4:39103] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 08:18:54,186 INFO [RS:2;jenkins-hbase4:39103] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 08:18:54,186 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 08:18:54,187 INFO [RS:2;jenkins-hbase4:39103] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39103 2023-07-12 08:18:54,189 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44351,1689149930457 already deleted, retry=false 2023-07-12 08:18:54,189 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44351,1689149930457 expired; onlineServers=2 2023-07-12 08:18:54,190 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:54,190 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39103,1689149930544 2023-07-12 08:18:54,190 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:54,191 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39103,1689149930544] 2023-07-12 08:18:54,191 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39103,1689149930544; numProcessing=3 2023-07-12 08:18:54,192 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39103,1689149930544 already deleted, retry=false 2023-07-12 08:18:54,192 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39103,1689149930544 expired; onlineServers=1 2023-07-12 08:18:54,377 INFO [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33557,1689149930502; all regions closed. 
2023-07-12 08:18:54,387 DEBUG [RS:1;jenkins-hbase4:33557] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/oldWALs 2023-07-12 08:18:54,387 INFO [RS:1;jenkins-hbase4:33557] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33557%2C1689149930502.meta:.meta(num 1689149931213) 2023-07-12 08:18:54,394 DEBUG [RS:1;jenkins-hbase4:33557] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/oldWALs 2023-07-12 08:18:54,394 INFO [RS:1;jenkins-hbase4:33557] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33557%2C1689149930502:(num 1689149931013) 2023-07-12 08:18:54,394 DEBUG [RS:1;jenkins-hbase4:33557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:54,394 INFO [RS:1;jenkins-hbase4:33557] regionserver.LeaseManager(133): Closed leases 2023-07-12 08:18:54,395 INFO [RS:1;jenkins-hbase4:33557] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 08:18:54,395 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 08:18:54,396 INFO [RS:1;jenkins-hbase4:33557] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33557 2023-07-12 08:18:54,400 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 08:18:54,400 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33557,1689149930502 2023-07-12 08:18:54,401 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33557,1689149930502] 2023-07-12 08:18:54,401 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33557,1689149930502; numProcessing=4 2023-07-12 08:18:54,402 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33557,1689149930502 already deleted, retry=false 2023-07-12 08:18:54,402 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33557,1689149930502 expired; onlineServers=0 2023-07-12 08:18:54,402 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36711,1689149930395' ***** 2023-07-12 08:18:54,402 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 08:18:54,403 DEBUG [M:0;jenkins-hbase4:36711] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ad563ca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 08:18:54,403 INFO [M:0;jenkins-hbase4:36711] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 08:18:54,406 INFO 
[M:0;jenkins-hbase4:36711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@12396c25{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 08:18:54,407 INFO [M:0;jenkins-hbase4:36711] server.AbstractConnector(383): Stopped ServerConnector@67d7b2b8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:54,407 INFO [M:0;jenkins-hbase4:36711] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 08:18:54,407 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 08:18:54,408 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 08:18:54,408 INFO [M:0;jenkins-hbase4:36711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7057b85a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 08:18:54,409 INFO [M:0;jenkins-hbase4:36711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@69823956{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/hadoop.log.dir/,STOPPED} 2023-07-12 08:18:54,409 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 08:18:54,410 INFO [M:0;jenkins-hbase4:36711] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36711,1689149930395 2023-07-12 08:18:54,410 INFO [M:0;jenkins-hbase4:36711] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36711,1689149930395; all regions closed. 2023-07-12 08:18:54,410 DEBUG [M:0;jenkins-hbase4:36711] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 08:18:54,410 INFO [M:0;jenkins-hbase4:36711] master.HMaster(1491): Stopping master jetty server 2023-07-12 08:18:54,411 INFO [M:0;jenkins-hbase4:36711] server.AbstractConnector(383): Stopped ServerConnector@1805bf9b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 08:18:54,412 DEBUG [M:0;jenkins-hbase4:36711] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 08:18:54,412 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 08:18:54,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689149930789] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689149930789,5,FailOnTimeoutGroup] 2023-07-12 08:18:54,412 DEBUG [M:0;jenkins-hbase4:36711] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 08:18:54,412 INFO [M:0;jenkins-hbase4:36711] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 08:18:54,412 INFO [M:0;jenkins-hbase4:36711] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-12 08:18:54,412 INFO [M:0;jenkins-hbase4:36711] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-12 08:18:54,412 DEBUG [M:0;jenkins-hbase4:36711] master.HMaster(1512): Stopping service threads 2023-07-12 08:18:54,412 INFO [M:0;jenkins-hbase4:36711] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 08:18:54,412 ERROR [M:0;jenkins-hbase4:36711] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 08:18:54,413 INFO [M:0;jenkins-hbase4:36711] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 08:18:54,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689149930790] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689149930790,5,FailOnTimeoutGroup] 2023-07-12 08:18:54,413 DEBUG [M:0;jenkins-hbase4:36711] zookeeper.ZKUtil(398): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 08:18:54,413 WARN [M:0;jenkins-hbase4:36711] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 08:18:54,413 INFO [M:0;jenkins-hbase4:36711] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 08:18:54,413 INFO [M:0;jenkins-hbase4:36711] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 08:18:54,414 DEBUG [M:0;jenkins-hbase4:36711] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 08:18:54,414 INFO [M:0;jenkins-hbase4:36711] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:54,414 DEBUG [M:0;jenkins-hbase4:36711] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:54,414 DEBUG [M:0;jenkins-hbase4:36711] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 08:18:54,414 DEBUG [M:0;jenkins-hbase4:36711] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:54,414 INFO [M:0;jenkins-hbase4:36711] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.24 KB heapSize=90.66 KB 2023-07-12 08:18:54,414 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-12 08:18:54,440 INFO [M:0;jenkins-hbase4:36711] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.24 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9b8a2847f3b642939e2a3f7e1df76c68 2023-07-12 08:18:54,447 DEBUG [M:0;jenkins-hbase4:36711] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9b8a2847f3b642939e2a3f7e1df76c68 as hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9b8a2847f3b642939e2a3f7e1df76c68 2023-07-12 08:18:54,456 INFO [M:0;jenkins-hbase4:36711] regionserver.HStore(1080): Added hdfs://localhost:41445/user/jenkins/test-data/45de6d27-099c-b1ba-4176-adea42b3ba8a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9b8a2847f3b642939e2a3f7e1df76c68, entries=22, sequenceid=175, filesize=11.1 K 2023-07-12 08:18:54,457 INFO [M:0;jenkins-hbase4:36711] regionserver.HRegion(2948): Finished flush of dataSize ~76.24 KB/78067, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 42ms, sequenceid=175, compaction requested=false 2023-07-12 08:18:54,459 INFO [M:0;jenkins-hbase4:36711] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 08:18:54,459 DEBUG [M:0;jenkins-hbase4:36711] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 08:18:54,463 INFO [M:0;jenkins-hbase4:36711] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 08:18:54,463 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 08:18:54,464 INFO [M:0;jenkins-hbase4:36711] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36711 2023-07-12 08:18:54,466 DEBUG [M:0;jenkins-hbase4:36711] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36711,1689149930395 already deleted, retry=false 2023-07-12 08:18:54,539 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 08:18:54,539 INFO [RS:1;jenkins-hbase4:33557] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33557,1689149930502; zookeeper connection closed. 
2023-07-12 08:18:54,539 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:33557-0x101589cfe880002, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 08:18:54,540 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@280559bf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@280559bf
2023-07-12 08:18:54,639 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 08:18:54,639 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x101589cfe880003, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 08:18:54,639 INFO  [RS:2;jenkins-hbase4:39103] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39103,1689149930544; zookeeper connection closed.
2023-07-12 08:18:54,640 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1d998958] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1d998958
2023-07-12 08:18:54,740 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 08:18:54,740 INFO  [RS:0;jenkins-hbase4:44351] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44351,1689149930457; zookeeper connection closed.
2023-07-12 08:18:54,740 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:44351-0x101589cfe880001, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 08:18:54,740 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@30f2c1e5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@30f2c1e5
2023-07-12 08:18:54,840 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 08:18:54,840 INFO  [RS:3;jenkins-hbase4:40145] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40145,1689149931835; zookeeper connection closed.
2023-07-12 08:18:54,840 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): regionserver:40145-0x101589cfe88000b, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 08:18:54,840 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3d086a0f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3d086a0f
2023-07-12 08:18:54,840 INFO  [Listener at localhost/43935] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-12 08:18:54,940 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 08:18:54,940 INFO  [M:0;jenkins-hbase4:36711] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36711,1689149930395; zookeeper connection closed.
2023-07-12 08:18:54,940 DEBUG [Listener at localhost/43935-EventThread] zookeeper.ZKWatcher(600): master:36711-0x101589cfe880000, quorum=127.0.0.1:54034, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 08:18:54,941 WARN  [Listener at localhost/43935] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-12 08:18:54,945 INFO  [Listener at localhost/43935] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-12 08:18:55,049 WARN  [BP-819731304-172.31.14.131-1689149929669 heartbeating to localhost/127.0.0.1:41445] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-12 08:18:55,049 WARN  [BP-819731304-172.31.14.131-1689149929669 heartbeating to localhost/127.0.0.1:41445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-819731304-172.31.14.131-1689149929669 (Datanode Uuid c486b0df-5774-43d8-99d7-031dd7934dbb) service to localhost/127.0.0.1:41445
2023-07-12 08:18:55,050 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data5/current/BP-819731304-172.31.14.131-1689149929669] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 08:18:55,051 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data6/current/BP-819731304-172.31.14.131-1689149929669] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 08:18:55,054 WARN  [Listener at localhost/43935] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-12 08:18:55,067 INFO  [Listener at localhost/43935] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-12 08:18:55,170 WARN  [BP-819731304-172.31.14.131-1689149929669 heartbeating to localhost/127.0.0.1:41445] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-12 08:18:55,170 WARN  [BP-819731304-172.31.14.131-1689149929669 heartbeating to localhost/127.0.0.1:41445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-819731304-172.31.14.131-1689149929669 (Datanode Uuid 58e2d8f7-dd04-4c9e-916c-f47f8ec3e74c) service to localhost/127.0.0.1:41445
2023-07-12 08:18:55,170 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data3/current/BP-819731304-172.31.14.131-1689149929669] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 08:18:55,171 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data4/current/BP-819731304-172.31.14.131-1689149929669] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 08:18:55,172 WARN  [Listener at localhost/43935] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-12 08:18:55,175 INFO  [Listener at localhost/43935] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-12 08:18:55,277 WARN  [BP-819731304-172.31.14.131-1689149929669 heartbeating to localhost/127.0.0.1:41445] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-12 08:18:55,277 WARN  [BP-819731304-172.31.14.131-1689149929669 heartbeating to localhost/127.0.0.1:41445] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-819731304-172.31.14.131-1689149929669 (Datanode Uuid e2d555f1-42f6-4c7a-b7b1-2b42acfa1735) service to localhost/127.0.0.1:41445
2023-07-12 08:18:55,278 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data1/current/BP-819731304-172.31.14.131-1689149929669] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 08:18:55,278 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/bd8765e2-7123-7b20-5329-98567442fa7e/cluster_74862f4d-c21f-9b0d-9616-edde874d7034/dfs/data/data2/current/BP-819731304-172.31.14.131-1689149929669] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 08:18:55,288 INFO  [Listener at localhost/43935] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-12 08:18:55,403 INFO  [Listener at localhost/43935] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-12 08:18:55,434 INFO  [Listener at localhost/43935] hbase.HBaseTestingUtility(1293): Minicluster is down