2023-07-19 18:14:27,030 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb 2023-07-19 18:14:27,051 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-19 18:14:27,069 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-19 18:14:27,070 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8, deleteOnExit=true 2023-07-19 18:14:27,070 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-19 18:14:27,070 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/test.cache.data in system properties and HBase conf 2023-07-19 18:14:27,071 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.tmp.dir in system properties and HBase conf 2023-07-19 18:14:27,071 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir in system properties and HBase conf 2023-07-19 18:14:27,072 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-19 18:14:27,072 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-19 18:14:27,072 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-19 18:14:27,221 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-19 18:14:27,725 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-19 18:14:27,729 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-19 18:14:27,730 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-19 18:14:27,730 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-19 18:14:27,731 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 18:14:27,731 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-19 18:14:27,732 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-19 18:14:27,732 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 18:14:27,733 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 18:14:27,733 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-19 18:14:27,734 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/nfs.dump.dir in system properties and HBase conf 2023-07-19 18:14:27,734 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/java.io.tmpdir in system properties and HBase conf 2023-07-19 18:14:27,734 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 18:14:27,735 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-19 18:14:27,735 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-19 18:14:28,265 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 18:14:28,269 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 18:14:28,556 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-19 18:14:28,714 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-19 18:14:28,728 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:14:28,762 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:14:28,810 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/java.io.tmpdir/Jetty_localhost_43585_hdfs____b14ddp/webapp 2023-07-19 18:14:28,940 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43585 2023-07-19 18:14:28,950 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 18:14:28,950 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 18:14:29,447 WARN [Listener at localhost/41243] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:14:29,529 WARN [Listener at localhost/41243] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 18:14:29,553 WARN [Listener at localhost/41243] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:14:29,562 INFO [Listener at localhost/41243] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:14:29,568 INFO [Listener at localhost/41243] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/java.io.tmpdir/Jetty_localhost_45257_datanode____g1onmi/webapp 2023-07-19 18:14:29,697 INFO [Listener at localhost/41243] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45257 2023-07-19 18:14:30,149 WARN [Listener at localhost/44959] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:14:30,161 WARN [Listener at localhost/44959] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 18:14:30,164 WARN [Listener at localhost/44959] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:14:30,166 INFO [Listener at localhost/44959] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:14:30,175 INFO [Listener at localhost/44959] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/java.io.tmpdir/Jetty_localhost_36375_datanode____34xm9h/webapp 2023-07-19 18:14:30,283 INFO [Listener at localhost/44959] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36375 2023-07-19 18:14:30,297 WARN [Listener at localhost/35623] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:14:30,325 WARN [Listener at localhost/35623] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 18:14:30,329 WARN [Listener at localhost/35623] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:14:30,331 INFO [Listener at localhost/35623] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:14:30,337 INFO [Listener at localhost/35623] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/java.io.tmpdir/Jetty_localhost_39767_datanode____55sv09/webapp 2023-07-19 18:14:30,475 INFO [Listener at localhost/35623] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39767 2023-07-19 18:14:30,493 WARN [Listener at localhost/46039] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:14:30,693 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbced5a02eb84c0b7: Processing first storage report for DS-214dd5b4-56cd-4179-a190-89691ecc0162 from datanode f62b0187-6db6-43d7-a389-69e2232367be 2023-07-19 18:14:30,695 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbced5a02eb84c0b7: from storage DS-214dd5b4-56cd-4179-a190-89691ecc0162 node DatanodeRegistration(127.0.0.1:42841, datanodeUuid=f62b0187-6db6-43d7-a389-69e2232367be, infoPort=41041, 
infoSecurePort=0, ipcPort=46039, storageInfo=lv=-57;cid=testClusterID;nsid=1062735292;c=1689790468337), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-19 18:14:30,696 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6ff0333c13bf474b: Processing first storage report for DS-7e5ff312-bee9-418f-8326-eda7dc88166d from datanode 3f572fc8-b3d9-423f-88eb-2c4098dce574 2023-07-19 18:14:30,696 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6ff0333c13bf474b: from storage DS-7e5ff312-bee9-418f-8326-eda7dc88166d node DatanodeRegistration(127.0.0.1:42045, datanodeUuid=3f572fc8-b3d9-423f-88eb-2c4098dce574, infoPort=40351, infoSecurePort=0, ipcPort=44959, storageInfo=lv=-57;cid=testClusterID;nsid=1062735292;c=1689790468337), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:14:30,696 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbced5a02eb84c0b7: Processing first storage report for DS-ea23d908-0d50-406d-a0af-e61b503168bf from datanode f62b0187-6db6-43d7-a389-69e2232367be 2023-07-19 18:14:30,696 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbced5a02eb84c0b7: from storage DS-ea23d908-0d50-406d-a0af-e61b503168bf node DatanodeRegistration(127.0.0.1:42841, datanodeUuid=f62b0187-6db6-43d7-a389-69e2232367be, infoPort=41041, infoSecurePort=0, ipcPort=46039, storageInfo=lv=-57;cid=testClusterID;nsid=1062735292;c=1689790468337), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:14:30,696 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1e76db8be68d1936: Processing first storage report for DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c from datanode 462ffe16-dd7d-403c-a132-8d98d1e9d939 2023-07-19 18:14:30,697 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1e76db8be68d1936: from storage DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c node DatanodeRegistration(127.0.0.1:44697, datanodeUuid=462ffe16-dd7d-403c-a132-8d98d1e9d939, infoPort=33055, infoSecurePort=0, ipcPort=35623, storageInfo=lv=-57;cid=testClusterID;nsid=1062735292;c=1689790468337), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-19 18:14:30,697 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6ff0333c13bf474b: Processing first storage report for DS-5a1b0aa5-d0f5-41dc-b6cd-cba7a8984e3a from datanode 3f572fc8-b3d9-423f-88eb-2c4098dce574 2023-07-19 18:14:30,697 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6ff0333c13bf474b: from storage DS-5a1b0aa5-d0f5-41dc-b6cd-cba7a8984e3a node DatanodeRegistration(127.0.0.1:42045, datanodeUuid=3f572fc8-b3d9-423f-88eb-2c4098dce574, infoPort=40351, infoSecurePort=0, ipcPort=44959, storageInfo=lv=-57;cid=testClusterID;nsid=1062735292;c=1689790468337), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:14:30,697 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1e76db8be68d1936: Processing first storage report for DS-5c33eb07-5a15-42dd-a090-bc154baa568b from datanode 462ffe16-dd7d-403c-a132-8d98d1e9d939 2023-07-19 18:14:30,697 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1e76db8be68d1936: from storage 
DS-5c33eb07-5a15-42dd-a090-bc154baa568b node DatanodeRegistration(127.0.0.1:44697, datanodeUuid=462ffe16-dd7d-403c-a132-8d98d1e9d939, infoPort=33055, infoSecurePort=0, ipcPort=35623, storageInfo=lv=-57;cid=testClusterID;nsid=1062735292;c=1689790468337), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:14:30,899 DEBUG [Listener at localhost/46039] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb 2023-07-19 18:14:30,970 INFO [Listener at localhost/46039] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/zookeeper_0, clientPort=61716, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-19 18:14:30,984 INFO [Listener at localhost/46039] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61716 2023-07-19 18:14:30,992 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:30,995 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:31,287 INFO [Listener at localhost/46039] util.FSUtils(471): Created version file at hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475 with version=8 2023-07-19 18:14:31,287 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/hbase-staging 2023-07-19 18:14:31,296 DEBUG [Listener at localhost/46039] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-19 18:14:31,296 DEBUG [Listener at localhost/46039] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-19 18:14:31,296 DEBUG [Listener at localhost/46039] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-19 18:14:31,296 DEBUG [Listener at localhost/46039] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
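The bootstrap logged above (mini DFS with three datanodes, a single mini ZooKeeper on clientPort=61716, then an HBase minicluster with one master and three region servers) is the standard HBaseTestingUtility flow. A minimal sketch of that flow is shown below, assuming JUnit 4 and the HBase 2.4 test classes named in the log; the class name MiniClusterSetupSketch is illustrative and this is not the actual TestRSGroupsAdmin1 source.

// Sketch only: approximates how a TestRSGroupsAdmin1-style test brings up the
// minicluster whose startup is logged above (1 master, 3 region servers, 3 datanodes, 1 ZK).
import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;

public class MiniClusterSetupSketch {  // hypothetical class name
  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(MiniClusterSetupSketch.class);

  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Mirrors the StartMiniClusterOption printed in the log:
    // numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}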
2023-07-19 18:14:31,755 INFO [Listener at localhost/46039] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-19 18:14:32,339 INFO [Listener at localhost/46039] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:14:32,378 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:32,378 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:32,379 INFO [Listener at localhost/46039] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:14:32,379 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:32,379 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:14:32,533 INFO [Listener at localhost/46039] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:14:32,616 DEBUG [Listener at localhost/46039] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-19 18:14:32,713 INFO [Listener at localhost/46039] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46739 2023-07-19 18:14:32,724 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:32,727 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:32,750 INFO [Listener at localhost/46039] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46739 connecting to ZooKeeper ensemble=127.0.0.1:61716 2023-07-19 18:14:32,808 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:467390x0, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:14:32,816 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46739-0x1017ecade2e0000 connected 2023-07-19 18:14:32,861 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:14:32,862 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:14:32,866 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:14:32,875 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46739 2023-07-19 18:14:32,875 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46739 2023-07-19 18:14:32,876 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46739 2023-07-19 18:14:32,886 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46739 2023-07-19 18:14:32,887 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46739 2023-07-19 18:14:32,926 INFO [Listener at localhost/46039] log.Log(170): Logging initialized @6634ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-19 18:14:33,054 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:14:33,055 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:14:33,055 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:14:33,057 INFO [Listener at localhost/46039] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-19 18:14:33,057 INFO [Listener at localhost/46039] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:14:33,057 INFO [Listener at localhost/46039] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:14:33,061 INFO [Listener at localhost/46039] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
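The repeated "Set watcher on znode that does not yet exist" lines record watches being registered on /hbase/master, /hbase/running and /hbase/acl before those znodes are created. A minimal sketch of that pattern follows, using the plain Apache ZooKeeper client rather than HBase's internal ZKWatcher/ZKUtil, and assuming the ensemble address from the log is reachable; the class name and timeout value are illustrative.

// Sketch: register a watch on a znode (e.g. /hbase/master) that may not exist yet.
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZnodeWatchSketch {  // hypothetical class name
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // Quorum/port taken from the log; session timeout is arbitrary here.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:61716", 90000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
      if (event.getType() == Watcher.Event.EventType.NodeCreated) {
        System.out.println("znode created: " + event.getPath());
      }
    });
    connected.await();
    // exists() with watch=true registers a one-shot watch even when the znode
    // is absent; the NodeCreated event above fires once the znode appears.
    Stat stat = zk.exists("/hbase/master", true);
    System.out.println("/hbase/master currently " + (stat == null ? "absent" : "present"));
    zk.close();
  }
}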
2023-07-19 18:14:33,122 INFO [Listener at localhost/46039] http.HttpServer(1146): Jetty bound to port 42985 2023-07-19 18:14:33,125 INFO [Listener at localhost/46039] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:14:33,165 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:33,168 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@51a5baa3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:14:33,169 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:33,170 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7522554c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:14:33,385 INFO [Listener at localhost/46039] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:14:33,401 INFO [Listener at localhost/46039] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:14:33,401 INFO [Listener at localhost/46039] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:14:33,404 INFO [Listener at localhost/46039] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 18:14:33,412 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:33,445 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@43d80c2b{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/java.io.tmpdir/jetty-0_0_0_0-42985-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5772203819155222823/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 18:14:33,457 INFO [Listener at localhost/46039] server.AbstractConnector(333): Started ServerConnector@5038041a{HTTP/1.1, (http/1.1)}{0.0.0.0:42985} 2023-07-19 18:14:33,457 INFO [Listener at localhost/46039] server.Server(415): Started @7165ms 2023-07-19 18:14:33,461 INFO [Listener at localhost/46039] master.HMaster(444): hbase.rootdir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475, hbase.cluster.distributed=false 2023-07-19 18:14:33,553 INFO [Listener at localhost/46039] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:14:33,553 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:33,554 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:33,554 INFO 
[Listener at localhost/46039] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:14:33,554 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:33,554 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:14:33,562 INFO [Listener at localhost/46039] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:14:33,566 INFO [Listener at localhost/46039] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40615 2023-07-19 18:14:33,569 INFO [Listener at localhost/46039] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:14:33,579 DEBUG [Listener at localhost/46039] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:14:33,580 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:33,583 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:33,585 INFO [Listener at localhost/46039] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40615 connecting to ZooKeeper ensemble=127.0.0.1:61716 2023-07-19 18:14:33,592 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:406150x0, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:14:33,593 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40615-0x1017ecade2e0001 connected 2023-07-19 18:14:33,593 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:14:33,595 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:14:33,596 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:14:33,597 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40615 2023-07-19 18:14:33,597 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40615 2023-07-19 18:14:33,597 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40615 2023-07-19 18:14:33,598 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40615 2023-07-19 18:14:33,598 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40615 2023-07-19 18:14:33,602 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:14:33,602 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:14:33,602 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:14:33,604 INFO [Listener at localhost/46039] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:14:33,604 INFO [Listener at localhost/46039] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:14:33,604 INFO [Listener at localhost/46039] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:14:33,605 INFO [Listener at localhost/46039] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 18:14:33,607 INFO [Listener at localhost/46039] http.HttpServer(1146): Jetty bound to port 43625 2023-07-19 18:14:33,608 INFO [Listener at localhost/46039] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:14:33,635 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:33,636 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4b56872c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:14:33,636 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:33,636 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@180451a2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:14:33,775 INFO [Listener at localhost/46039] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:14:33,777 INFO [Listener at localhost/46039] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:14:33,777 INFO [Listener at localhost/46039] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:14:33,777 INFO [Listener at localhost/46039] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 18:14:33,779 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:33,784 INFO 
[Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@75640050{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/java.io.tmpdir/jetty-0_0_0_0-43625-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7426602517909854296/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:14:33,786 INFO [Listener at localhost/46039] server.AbstractConnector(333): Started ServerConnector@4aa1e459{HTTP/1.1, (http/1.1)}{0.0.0.0:43625} 2023-07-19 18:14:33,786 INFO [Listener at localhost/46039] server.Server(415): Started @7494ms 2023-07-19 18:14:33,800 INFO [Listener at localhost/46039] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:14:33,800 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:33,800 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:33,801 INFO [Listener at localhost/46039] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:14:33,801 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:33,801 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:14:33,801 INFO [Listener at localhost/46039] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:14:33,803 INFO [Listener at localhost/46039] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38251 2023-07-19 18:14:33,804 INFO [Listener at localhost/46039] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:14:33,805 DEBUG [Listener at localhost/46039] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:14:33,806 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:33,807 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:33,808 INFO [Listener at localhost/46039] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38251 connecting to ZooKeeper ensemble=127.0.0.1:61716 2023-07-19 18:14:33,812 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:382510x0, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 
18:14:33,814 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): regionserver:382510x0, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:14:33,815 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): regionserver:382510x0, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:14:33,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38251-0x1017ecade2e0002 connected 2023-07-19 18:14:33,816 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:14:33,820 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38251 2023-07-19 18:14:33,820 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38251 2023-07-19 18:14:33,821 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38251 2023-07-19 18:14:33,821 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38251 2023-07-19 18:14:33,821 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38251 2023-07-19 18:14:33,825 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:14:33,825 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:14:33,825 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:14:33,826 INFO [Listener at localhost/46039] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:14:33,826 INFO [Listener at localhost/46039] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:14:33,826 INFO [Listener at localhost/46039] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:14:33,826 INFO [Listener at localhost/46039] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
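Each master and region server above also brings up an embedded Jetty info server on an ephemeral port ("Jetty bound to port ...", "Started ServerConnector ... {0.0.0.0:...}"). A rough sketch of that pattern follows, written against the unshaded Jetty 9.4 API (HBase itself uses the relocated org.apache.hbase.thirdparty.org.eclipse.jetty packages seen in the log); the InfoServerSketch class and the /status servlet are illustrative only.

// Sketch: embedded Jetty server with one servlet context, bound to an ephemeral port.
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class InfoServerSketch {  // hypothetical class name
  public static void main(String[] args) throws Exception {
    Server server = new Server();
    ServerConnector connector = new ServerConnector(server);
    connector.setPort(0);  // 0 = pick an ephemeral port, as the test info servers do
    server.addConnector(connector);

    ServletContextHandler context = new ServletContextHandler();
    context.setContextPath("/");
    context.addServlet(new ServletHolder(new HttpServlet() {
      @Override
      protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("ok");
      }
    }), "/status");
    server.setHandler(context);

    server.start();
    System.out.println("Jetty bound to port " + connector.getLocalPort());
    server.join();
  }
}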
2023-07-19 18:14:33,827 INFO [Listener at localhost/46039] http.HttpServer(1146): Jetty bound to port 32929 2023-07-19 18:14:33,827 INFO [Listener at localhost/46039] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:14:33,830 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:33,830 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7e84c820{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:14:33,830 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:33,831 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34301e2c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:14:33,966 INFO [Listener at localhost/46039] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:14:33,967 INFO [Listener at localhost/46039] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:14:33,967 INFO [Listener at localhost/46039] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:14:33,967 INFO [Listener at localhost/46039] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 18:14:33,968 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:33,969 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7eecefb7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/java.io.tmpdir/jetty-0_0_0_0-32929-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8650736522677327572/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:14:33,970 INFO [Listener at localhost/46039] server.AbstractConnector(333): Started ServerConnector@7394d09{HTTP/1.1, (http/1.1)}{0.0.0.0:32929} 2023-07-19 18:14:33,970 INFO [Listener at localhost/46039] server.Server(415): Started @7679ms 2023-07-19 18:14:33,983 INFO [Listener at localhost/46039] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:14:33,983 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:33,984 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:33,984 INFO [Listener at localhost/46039] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:14:33,984 INFO 
[Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:33,984 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:14:33,984 INFO [Listener at localhost/46039] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:14:33,986 INFO [Listener at localhost/46039] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43775 2023-07-19 18:14:33,987 INFO [Listener at localhost/46039] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:14:33,989 DEBUG [Listener at localhost/46039] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:14:33,990 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:33,991 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:33,992 INFO [Listener at localhost/46039] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43775 connecting to ZooKeeper ensemble=127.0.0.1:61716 2023-07-19 18:14:33,997 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:437750x0, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:14:33,998 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43775-0x1017ecade2e0003 connected 2023-07-19 18:14:33,999 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:14:33,999 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:14:34,000 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:14:34,005 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43775 2023-07-19 18:14:34,005 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43775 2023-07-19 18:14:34,006 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43775 2023-07-19 18:14:34,006 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43775 2023-07-19 18:14:34,009 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43775 2023-07-19 18:14:34,012 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:14:34,013 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:14:34,013 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:14:34,014 INFO [Listener at localhost/46039] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:14:34,014 INFO [Listener at localhost/46039] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:14:34,014 INFO [Listener at localhost/46039] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:14:34,014 INFO [Listener at localhost/46039] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 18:14:34,016 INFO [Listener at localhost/46039] http.HttpServer(1146): Jetty bound to port 40521 2023-07-19 18:14:34,016 INFO [Listener at localhost/46039] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:14:34,021 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:34,021 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b0e15fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:14:34,022 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:34,022 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@22b75a27{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:14:34,148 INFO [Listener at localhost/46039] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:14:34,149 INFO [Listener at localhost/46039] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:14:34,149 INFO [Listener at localhost/46039] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:14:34,149 INFO [Listener at localhost/46039] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 18:14:34,150 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:34,151 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@60f62ff2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/java.io.tmpdir/jetty-0_0_0_0-40521-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6876760583941388186/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:14:34,153 INFO [Listener at localhost/46039] server.AbstractConnector(333): Started ServerConnector@447a00c4{HTTP/1.1, (http/1.1)}{0.0.0.0:40521} 2023-07-19 18:14:34,153 INFO [Listener at localhost/46039] server.Server(415): Started @7861ms 2023-07-19 18:14:34,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:14:34,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@17b5595c{HTTP/1.1, (http/1.1)}{0.0.0.0:33821} 2023-07-19 18:14:34,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @7875ms 2023-07-19 18:14:34,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46739,1689790471527 2023-07-19 18:14:34,181 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 18:14:34,182 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46739,1689790471527 2023-07-19 18:14:34,208 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:14:34,208 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:14:34,208 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:14:34,208 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:14:34,208 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:14:34,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 18:14:34,215 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 18:14:34,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46739,1689790471527 from backup master directory 2023-07-19 18:14:34,221 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46739,1689790471527 2023-07-19 18:14:34,221 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 18:14:34,222 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 18:14:34,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46739,1689790471527 2023-07-19 18:14:34,226 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-19 18:14:34,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-19 18:14:34,378 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/hbase.id with ID: 29f10742-1ec2-44d0-8ada-95f2b5b24d94 2023-07-19 18:14:34,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:34,442 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:14:34,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x39ac2b40 to 127.0.0.1:61716 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:14:34,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@b73a78e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:14:34,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:14:34,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-19 18:14:34,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-19 18:14:34,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-19 18:14:34,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-19 18:14:34,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-19 18:14:34,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:14:34,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/data/master/store-tmp 2023-07-19 18:14:34,732 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:34,733 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 18:14:34,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:14:34,733 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:14:34,733 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 18:14:34,733 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:14:34,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-19 18:14:34,733 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 18:14:34,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/WALs/jenkins-hbase4.apache.org,46739,1689790471527 2023-07-19 18:14:34,764 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46739%2C1689790471527, suffix=, logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/WALs/jenkins-hbase4.apache.org,46739,1689790471527, archiveDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/oldWALs, maxLogs=10 2023-07-19 18:14:34,838 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK] 2023-07-19 18:14:34,838 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK] 2023-07-19 18:14:34,838 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK] 2023-07-19 18:14:34,848 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-19 18:14:34,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/WALs/jenkins-hbase4.apache.org,46739,1689790471527/jenkins-hbase4.apache.org%2C46739%2C1689790471527.1689790474775 2023-07-19 18:14:34,942 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK], DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK], DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK]] 2023-07-19 18:14:34,943 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:34,944 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:34,950 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:14:34,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:14:35,045 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:14:35,054 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-19 18:14:35,095 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-19 18:14:35,113 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-19 18:14:35,120 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:14:35,122 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:14:35,141 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:14:35,151 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:35,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11379194400, jitterRate=0.05977006256580353}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:35,152 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 18:14:35,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-19 18:14:35,190 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-19 18:14:35,190 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-19 18:14:35,194 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-19 18:14:35,196 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 2 msec 2023-07-19 18:14:35,242 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 45 msec 2023-07-19 18:14:35,242 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-19 18:14:35,276 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-19 18:14:35,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-19 18:14:35,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-19 18:14:35,299 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-19 18:14:35,304 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-19 18:14:35,310 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:14:35,311 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-19 18:14:35,312 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-19 18:14:35,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-19 18:14:35,332 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 18:14:35,332 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 18:14:35,332 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 18:14:35,332 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 18:14:35,333 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:14:35,333 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46739,1689790471527, sessionid=0x1017ecade2e0000, setting cluster-up flag (Was=false) 2023-07-19 18:14:35,359 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:14:35,365 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-19 18:14:35,367 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46739,1689790471527 2023-07-19 18:14:35,373 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:14:35,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-19 18:14:35,384 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46739,1689790471527 2023-07-19 18:14:35,387 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.hbase-snapshot/.tmp 2023-07-19 18:14:35,464 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(951): ClusterId : 29f10742-1ec2-44d0-8ada-95f2b5b24d94 2023-07-19 18:14:35,468 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(951): ClusterId : 29f10742-1ec2-44d0-8ada-95f2b5b24d94 2023-07-19 18:14:35,468 INFO [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(951): ClusterId : 29f10742-1ec2-44d0-8ada-95f2b5b24d94 2023-07-19 18:14:35,474 DEBUG [RS:2;jenkins-hbase4:43775] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:14:35,477 DEBUG [RS:1;jenkins-hbase4:38251] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:14:35,474 DEBUG [RS:0;jenkins-hbase4:40615] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:14:35,481 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-19 18:14:35,485 DEBUG [RS:2;jenkins-hbase4:43775] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:14:35,485 DEBUG [RS:1;jenkins-hbase4:38251] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:14:35,485 DEBUG [RS:1;jenkins-hbase4:38251] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:14:35,485 DEBUG [RS:0;jenkins-hbase4:40615] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:14:35,485 DEBUG [RS:2;jenkins-hbase4:43775] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:14:35,485 DEBUG [RS:0;jenkins-hbase4:40615] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:14:35,492 DEBUG [RS:2;jenkins-hbase4:43775] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 18:14:35,492 DEBUG [RS:0;jenkins-hbase4:40615] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 18:14:35,493 DEBUG [RS:1;jenkins-hbase4:38251] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 18:14:35,495 DEBUG 
[RS:1;jenkins-hbase4:38251] zookeeper.ReadOnlyZKClient(139): Connect 0x48896ff7 to 127.0.0.1:61716 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:14:35,495 DEBUG [RS:0;jenkins-hbase4:40615] zookeeper.ReadOnlyZKClient(139): Connect 0x04d8e1ed to 127.0.0.1:61716 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:14:35,495 DEBUG [RS:2;jenkins-hbase4:43775] zookeeper.ReadOnlyZKClient(139): Connect 0x28fdd382 to 127.0.0.1:61716 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:14:35,511 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-19 18:14:35,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-19 18:14:35,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-19 18:14:35,532 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 18:14:35,533 DEBUG [RS:1;jenkins-hbase4:38251] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@464e9896, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:14:35,536 DEBUG [RS:1;jenkins-hbase4:38251] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42710487, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:14:35,539 DEBUG [RS:2;jenkins-hbase4:43775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54bb25be, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:14:35,539 DEBUG [RS:2;jenkins-hbase4:43775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f2281f5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:14:35,547 DEBUG [RS:0;jenkins-hbase4:40615] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@374415b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:14:35,547 DEBUG [RS:0;jenkins-hbase4:40615] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4f852bda, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:14:35,581 DEBUG 
[RS:0;jenkins-hbase4:40615] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:40615 2023-07-19 18:14:35,589 INFO [RS:0;jenkins-hbase4:40615] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:14:35,589 INFO [RS:0;jenkins-hbase4:40615] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:14:35,589 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:43775 2023-07-19 18:14:35,593 INFO [RS:2;jenkins-hbase4:43775] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:14:35,599 INFO [RS:2;jenkins-hbase4:43775] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:14:35,599 DEBUG [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 18:14:35,599 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 18:14:35,602 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46739,1689790471527 with isa=jenkins-hbase4.apache.org/172.31.14.131:40615, startcode=1689790473552 2023-07-19 18:14:35,602 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46739,1689790471527 with isa=jenkins-hbase4.apache.org/172.31.14.131:43775, startcode=1689790473982 2023-07-19 18:14:35,610 DEBUG [RS:1;jenkins-hbase4:38251] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38251 2023-07-19 18:14:35,610 INFO [RS:1;jenkins-hbase4:38251] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:14:35,610 INFO [RS:1;jenkins-hbase4:38251] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:14:35,610 DEBUG [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-19 18:14:35,612 INFO [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46739,1689790471527 with isa=jenkins-hbase4.apache.org/172.31.14.131:38251, startcode=1689790473799 2023-07-19 18:14:35,636 DEBUG [RS:0;jenkins-hbase4:40615] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:14:35,639 DEBUG [RS:2;jenkins-hbase4:43775] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:14:35,638 DEBUG [RS:1;jenkins-hbase4:38251] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:14:35,652 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-19 18:14:35,730 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48907, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:14:35,730 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56573, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:14:35,730 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33701, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:14:35,748 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:35,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 18:14:35,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-19 18:14:35,760 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 18:14:35,760 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:35,760 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 18:14:35,763 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:35,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:14:35,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:14:35,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:14:35,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:14:35,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 
2023-07-19 18:14:35,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:35,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:14:35,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:35,786 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689790505786 2023-07-19 18:14:35,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-19 18:14:35,792 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 18:14:35,793 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-19 18:14:35,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-19 18:14:35,797 DEBUG [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(2830): Master is not running yet 2023-07-19 18:14:35,797 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(2830): Master is not running yet 2023-07-19 18:14:35,797 WARN [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-19 18:14:35,797 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 18:14:35,797 DEBUG [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(2830): Master is not running yet 2023-07-19 18:14:35,798 WARN [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-19 18:14:35,797 WARN [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
2023-07-19 18:14:35,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-19 18:14:35,808 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-19 18:14:35,808 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-19 18:14:35,808 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-19 18:14:35,811 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:35,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-19 18:14:35,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-19 18:14:35,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-19 18:14:35,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-19 18:14:35,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-19 18:14:35,822 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689790475822,5,FailOnTimeoutGroup] 2023-07-19 18:14:35,826 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689790475823,5,FailOnTimeoutGroup] 2023-07-19 18:14:35,827 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:35,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-19 18:14:35,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:35,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-19 18:14:35,893 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:35,894 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:35,894 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475 2023-07-19 18:14:35,898 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46739,1689790471527 with isa=jenkins-hbase4.apache.org/172.31.14.131:43775, startcode=1689790473982 2023-07-19 18:14:35,898 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46739,1689790471527 with isa=jenkins-hbase4.apache.org/172.31.14.131:40615, startcode=1689790473552 2023-07-19 18:14:35,901 INFO [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46739,1689790471527 with isa=jenkins-hbase4.apache.org/172.31.14.131:38251, startcode=1689790473799 2023-07-19 18:14:35,907 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46739] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:35,909 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 18:14:35,919 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-19 18:14:35,920 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46739] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:35,921 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 18:14:35,922 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-19 18:14:35,922 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475 2023-07-19 18:14:35,922 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41243 2023-07-19 18:14:35,922 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42985 2023-07-19 18:14:35,924 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46739] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:35,925 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 18:14:35,925 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-19 18:14:35,927 DEBUG [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475 2023-07-19 18:14:35,927 DEBUG [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41243 2023-07-19 18:14:35,927 DEBUG [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42985 2023-07-19 18:14:35,928 DEBUG [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475 2023-07-19 18:14:35,928 DEBUG [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41243 2023-07-19 18:14:35,928 DEBUG [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42985 2023-07-19 18:14:35,940 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:14:35,942 DEBUG [RS:0;jenkins-hbase4:40615] zookeeper.ZKUtil(162): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:35,943 DEBUG [RS:1;jenkins-hbase4:38251] zookeeper.ZKUtil(162): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:35,943 WARN [RS:0;jenkins-hbase4:40615] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 18:14:35,943 INFO [RS:0;jenkins-hbase4:40615] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:14:35,943 DEBUG [RS:2;jenkins-hbase4:43775] zookeeper.ZKUtil(162): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:35,943 WARN [RS:1;jenkins-hbase4:38251] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 18:14:35,943 DEBUG [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:35,944 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43775,1689790473982] 2023-07-19 18:14:35,944 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38251,1689790473799] 2023-07-19 18:14:35,944 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40615,1689790473552] 2023-07-19 18:14:35,944 WARN [RS:2;jenkins-hbase4:43775] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 18:14:35,944 INFO [RS:1;jenkins-hbase4:38251] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:14:35,944 INFO [RS:2;jenkins-hbase4:43775] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:14:35,946 DEBUG [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:35,946 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:35,947 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:35,975 DEBUG [RS:1;jenkins-hbase4:38251] zookeeper.ZKUtil(162): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:35,976 DEBUG [RS:1;jenkins-hbase4:38251] zookeeper.ZKUtil(162): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:35,976 DEBUG [RS:0;jenkins-hbase4:40615] zookeeper.ZKUtil(162): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:35,977 DEBUG [RS:1;jenkins-hbase4:38251] zookeeper.ZKUtil(162): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:35,977 DEBUG [RS:0;jenkins-hbase4:40615] zookeeper.ZKUtil(162): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:35,977 DEBUG [RS:2;jenkins-hbase4:43775] zookeeper.ZKUtil(162): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:35,977 DEBUG [RS:0;jenkins-hbase4:40615] zookeeper.ZKUtil(162): 
regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:35,978 DEBUG [RS:2;jenkins-hbase4:43775] zookeeper.ZKUtil(162): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:35,979 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 18:14:35,981 DEBUG [RS:2;jenkins-hbase4:43775] zookeeper.ZKUtil(162): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:35,987 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/info 2023-07-19 18:14:35,988 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 18:14:35,989 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:35,989 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 18:14:35,992 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:14:35,992 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:14:35,992 DEBUG [RS:0;jenkins-hbase4:40615] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:14:36,000 DEBUG [RS:1;jenkins-hbase4:38251] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:14:36,008 INFO [RS:2;jenkins-hbase4:43775] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:14:36,009 INFO [RS:0;jenkins-hbase4:40615] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:14:36,008 INFO [RS:1;jenkins-hbase4:38251] 
regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:14:36,009 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 18:14:36,015 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:36,015 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 18:14:36,021 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/table 2023-07-19 18:14:36,021 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 18:14:36,023 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:36,025 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740 2023-07-19 18:14:36,027 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740 2023-07-19 18:14:36,031 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-19 18:14:36,035 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 18:14:36,052 INFO [RS:2;jenkins-hbase4:43775] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:14:36,053 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:36,054 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10893310560, jitterRate=0.014518603682518005}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 18:14:36,055 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 18:14:36,055 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 18:14:36,055 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 18:14:36,055 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 18:14:36,055 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 18:14:36,055 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 18:14:36,073 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 18:14:36,073 INFO [RS:0;jenkins-hbase4:40615] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:14:36,073 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 18:14:36,067 INFO [RS:1;jenkins-hbase4:38251] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:14:36,083 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 18:14:36,083 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-19 18:14:36,084 INFO [RS:2;jenkins-hbase4:43775] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:14:36,084 INFO [RS:0;jenkins-hbase4:40615] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:14:36,085 INFO [RS:2;jenkins-hbase4:43775] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,085 INFO [RS:0;jenkins-hbase4:40615] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 18:14:36,085 INFO [RS:1;jenkins-hbase4:38251] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:14:36,086 INFO [RS:1;jenkins-hbase4:38251] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,091 INFO [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:14:36,091 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:14:36,092 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:14:36,101 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-19 18:14:36,103 INFO [RS:2;jenkins-hbase4:43775] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,103 INFO [RS:0;jenkins-hbase4:40615] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,103 INFO [RS:1;jenkins-hbase4:38251] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,103 DEBUG [RS:0;jenkins-hbase4:40615] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,103 DEBUG [RS:0;jenkins-hbase4:40615] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,103 DEBUG [RS:2;jenkins-hbase4:43775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,103 DEBUG [RS:1;jenkins-hbase4:38251] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,104 DEBUG [RS:2;jenkins-hbase4:43775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,104 DEBUG [RS:1;jenkins-hbase4:38251] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,104 DEBUG [RS:2;jenkins-hbase4:43775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,104 DEBUG [RS:1;jenkins-hbase4:38251] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,104 DEBUG [RS:2;jenkins-hbase4:43775] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,104 DEBUG [RS:1;jenkins-hbase4:38251] executor.ExecutorService(93): Starting executor service 
name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,104 DEBUG [RS:2;jenkins-hbase4:43775] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,104 DEBUG [RS:1;jenkins-hbase4:38251] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,104 DEBUG [RS:0;jenkins-hbase4:40615] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,105 DEBUG [RS:1;jenkins-hbase4:38251] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:14:36,105 DEBUG [RS:0;jenkins-hbase4:40615] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,105 DEBUG [RS:2;jenkins-hbase4:43775] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:14:36,105 DEBUG [RS:1;jenkins-hbase4:38251] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,105 DEBUG [RS:2;jenkins-hbase4:43775] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,105 DEBUG [RS:1;jenkins-hbase4:38251] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,105 DEBUG [RS:2;jenkins-hbase4:43775] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,105 DEBUG [RS:1;jenkins-hbase4:38251] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,105 DEBUG [RS:2;jenkins-hbase4:43775] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,105 DEBUG [RS:1;jenkins-hbase4:38251] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,105 DEBUG [RS:2;jenkins-hbase4:43775] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,105 DEBUG [RS:0;jenkins-hbase4:40615] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,106 DEBUG [RS:0;jenkins-hbase4:40615] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:14:36,106 DEBUG [RS:0;jenkins-hbase4:40615] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,106 DEBUG 
[RS:0;jenkins-hbase4:40615] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,106 DEBUG [RS:0;jenkins-hbase4:40615] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,106 DEBUG [RS:0;jenkins-hbase4:40615] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:36,120 INFO [RS:1;jenkins-hbase4:38251] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,121 INFO [RS:1;jenkins-hbase4:38251] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,121 INFO [RS:1;jenkins-hbase4:38251] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,133 INFO [RS:0;jenkins-hbase4:40615] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,133 INFO [RS:0;jenkins-hbase4:40615] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,133 INFO [RS:0;jenkins-hbase4:40615] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,134 INFO [RS:2;jenkins-hbase4:43775] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,134 INFO [RS:2;jenkins-hbase4:43775] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,134 INFO [RS:2;jenkins-hbase4:43775] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,143 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-19 18:14:36,153 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-19 18:14:36,157 INFO [RS:0;jenkins-hbase4:40615] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:14:36,157 INFO [RS:1;jenkins-hbase4:38251] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:14:36,158 INFO [RS:2;jenkins-hbase4:43775] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:14:36,161 INFO [RS:2;jenkins-hbase4:43775] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43775,1689790473982-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,161 INFO [RS:0;jenkins-hbase4:40615] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40615,1689790473552-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 18:14:36,161 INFO [RS:1;jenkins-hbase4:38251] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38251,1689790473799-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,184 INFO [RS:0;jenkins-hbase4:40615] regionserver.Replication(203): jenkins-hbase4.apache.org,40615,1689790473552 started 2023-07-19 18:14:36,184 INFO [RS:2;jenkins-hbase4:43775] regionserver.Replication(203): jenkins-hbase4.apache.org,43775,1689790473982 started 2023-07-19 18:14:36,184 INFO [RS:1;jenkins-hbase4:38251] regionserver.Replication(203): jenkins-hbase4.apache.org,38251,1689790473799 started 2023-07-19 18:14:36,184 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43775,1689790473982, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43775, sessionid=0x1017ecade2e0003 2023-07-19 18:14:36,184 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40615,1689790473552, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40615, sessionid=0x1017ecade2e0001 2023-07-19 18:14:36,184 INFO [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38251,1689790473799, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38251, sessionid=0x1017ecade2e0002 2023-07-19 18:14:36,185 DEBUG [RS:0;jenkins-hbase4:40615] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:14:36,185 DEBUG [RS:1;jenkins-hbase4:38251] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:14:36,185 DEBUG [RS:0;jenkins-hbase4:40615] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:36,185 DEBUG [RS:2;jenkins-hbase4:43775] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:14:36,185 DEBUG [RS:0;jenkins-hbase4:40615] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40615,1689790473552' 2023-07-19 18:14:36,185 DEBUG [RS:1;jenkins-hbase4:38251] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:36,186 DEBUG [RS:0;jenkins-hbase4:40615] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:14:36,186 DEBUG [RS:1;jenkins-hbase4:38251] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38251,1689790473799' 2023-07-19 18:14:36,185 DEBUG [RS:2;jenkins-hbase4:43775] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:36,187 DEBUG [RS:2;jenkins-hbase4:43775] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43775,1689790473982' 2023-07-19 18:14:36,187 DEBUG [RS:2;jenkins-hbase4:43775] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:14:36,186 DEBUG [RS:1;jenkins-hbase4:38251] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:14:36,187 DEBUG [RS:0;jenkins-hbase4:40615] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under 
znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:14:36,187 DEBUG [RS:1;jenkins-hbase4:38251] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:14:36,187 DEBUG [RS:2;jenkins-hbase4:43775] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:14:36,188 DEBUG [RS:0;jenkins-hbase4:40615] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:14:36,188 DEBUG [RS:0;jenkins-hbase4:40615] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:14:36,188 DEBUG [RS:0;jenkins-hbase4:40615] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:36,188 DEBUG [RS:1;jenkins-hbase4:38251] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:14:36,188 DEBUG [RS:2;jenkins-hbase4:43775] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:14:36,188 DEBUG [RS:1;jenkins-hbase4:38251] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:14:36,188 DEBUG [RS:0;jenkins-hbase4:40615] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40615,1689790473552' 2023-07-19 18:14:36,189 DEBUG [RS:0;jenkins-hbase4:40615] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:14:36,188 DEBUG [RS:1;jenkins-hbase4:38251] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:36,188 DEBUG [RS:2;jenkins-hbase4:43775] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:14:36,189 DEBUG [RS:2;jenkins-hbase4:43775] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:36,189 DEBUG [RS:2;jenkins-hbase4:43775] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43775,1689790473982' 2023-07-19 18:14:36,189 DEBUG [RS:1;jenkins-hbase4:38251] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38251,1689790473799' 2023-07-19 18:14:36,189 DEBUG [RS:1;jenkins-hbase4:38251] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:14:36,189 DEBUG [RS:2;jenkins-hbase4:43775] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:14:36,189 DEBUG [RS:0;jenkins-hbase4:40615] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:14:36,190 DEBUG [RS:1;jenkins-hbase4:38251] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:14:36,190 DEBUG [RS:0;jenkins-hbase4:40615] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:14:36,190 DEBUG [RS:2;jenkins-hbase4:43775] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:14:36,190 INFO [RS:0;jenkins-hbase4:40615] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 18:14:36,190 INFO 
[RS:0;jenkins-hbase4:40615] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-19 18:14:36,190 DEBUG [RS:1;jenkins-hbase4:38251] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:14:36,190 INFO [RS:1;jenkins-hbase4:38251] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 18:14:36,191 INFO [RS:1;jenkins-hbase4:38251] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-19 18:14:36,191 DEBUG [RS:2;jenkins-hbase4:43775] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:14:36,191 INFO [RS:2;jenkins-hbase4:43775] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 18:14:36,191 INFO [RS:2;jenkins-hbase4:43775] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-19 18:14:36,304 INFO [RS:0;jenkins-hbase4:40615] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40615%2C1689790473552, suffix=, logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,40615,1689790473552, archiveDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs, maxLogs=32 2023-07-19 18:14:36,305 INFO [RS:2;jenkins-hbase4:43775] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43775%2C1689790473982, suffix=, logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,43775,1689790473982, archiveDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs, maxLogs=32 2023-07-19 18:14:36,305 DEBUG [jenkins-hbase4:46739] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-19 18:14:36,304 INFO [RS:1;jenkins-hbase4:38251] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38251%2C1689790473799, suffix=, logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,38251,1689790473799, archiveDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs, maxLogs=32 2023-07-19 18:14:36,326 DEBUG [jenkins-hbase4:46739] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:36,332 DEBUG [jenkins-hbase4:46739] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:36,332 DEBUG [jenkins-hbase4:46739] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:36,332 DEBUG [jenkins-hbase4:46739] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:36,332 DEBUG [jenkins-hbase4:46739] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:36,344 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK] 2023-07-19 18:14:36,349 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK] 2023-07-19 18:14:36,349 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK] 2023-07-19 18:14:36,350 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK] 2023-07-19 18:14:36,350 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK] 2023-07-19 18:14:36,351 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40615,1689790473552, state=OPENING 2023-07-19 18:14:36,351 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK] 2023-07-19 18:14:36,368 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK] 2023-07-19 18:14:36,368 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK] 2023-07-19 18:14:36,369 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK] 2023-07-19 18:14:36,370 INFO [RS:2;jenkins-hbase4:43775] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,43775,1689790473982/jenkins-hbase4.apache.org%2C43775%2C1689790473982.1689790476311 2023-07-19 18:14:36,372 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-19 18:14:36,374 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:14:36,375 INFO [RS:1;jenkins-hbase4:38251] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,38251,1689790473799/jenkins-hbase4.apache.org%2C38251%2C1689790473799.1689790476311 2023-07-19 18:14:36,380 DEBUG [RS:2;jenkins-hbase4:43775] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK], DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK], DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK]] 2023-07-19 18:14:36,380 DEBUG [RS:1;jenkins-hbase4:38251] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK], DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK], DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK]] 2023-07-19 18:14:36,381 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 18:14:36,387 INFO [RS:0;jenkins-hbase4:40615] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,40615,1689790473552/jenkins-hbase4.apache.org%2C40615%2C1689790473552.1689790476311 2023-07-19 18:14:36,388 DEBUG [RS:0;jenkins-hbase4:40615] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK], DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK], DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK]] 2023-07-19 18:14:36,388 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:36,588 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:36,596 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:14:36,600 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32970, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:14:36,623 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 18:14:36,624 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:14:36,630 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40615%2C1689790473552.meta, suffix=.meta, logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,40615,1689790473552, archiveDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs, maxLogs=32 2023-07-19 18:14:36,659 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK] 2023-07-19 18:14:36,660 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK] 2023-07-19 18:14:36,662 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK] 2023-07-19 18:14:36,673 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,40615,1689790473552/jenkins-hbase4.apache.org%2C40615%2C1689790473552.meta.1689790476632.meta 2023-07-19 18:14:36,674 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK], DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK], DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK]] 2023-07-19 18:14:36,675 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:36,677 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 18:14:36,681 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 18:14:36,683 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-19 18:14:36,692 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 18:14:36,692 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:36,692 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 18:14:36,692 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 18:14:36,696 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 18:14:36,698 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/info 2023-07-19 18:14:36,698 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/info 2023-07-19 18:14:36,699 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 18:14:36,700 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:36,700 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 18:14:36,702 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:14:36,702 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:14:36,703 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 18:14:36,704 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:36,704 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 18:14:36,705 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/table 2023-07-19 18:14:36,706 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/table 2023-07-19 18:14:36,706 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 18:14:36,707 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:36,710 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740 2023-07-19 18:14:36,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740 2023-07-19 18:14:36,727 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-19 18:14:36,729 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 18:14:36,733 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9989025440, jitterRate=-0.06969951093196869}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 18:14:36,733 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 18:14:36,747 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689790476575 2023-07-19 18:14:36,778 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 18:14:36,779 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 18:14:36,780 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40615,1689790473552, state=OPEN 2023-07-19 18:14:36,785 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 18:14:36,785 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 18:14:36,790 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-19 18:14:36,791 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40615,1689790473552 in 397 msec 2023-07-19 18:14:36,798 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-19 18:14:36,798 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 692 msec 2023-07-19 18:14:36,804 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.2710 sec 2023-07-19 18:14:36,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689790476804, completionTime=-1 2023-07-19 18:14:36,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-19 18:14:36,805 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-19 18:14:36,863 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46739,1689790471527] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:14:36,867 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32980, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:14:36,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-19 18:14:36,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689790536888 2023-07-19 18:14:36,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689790596888 2023-07-19 18:14:36,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 83 msec 2023-07-19 18:14:36,892 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46739,1689790471527] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:14:36,941 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46739,1689790471527] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-19 18:14:36,943 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-19 18:14:36,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46739,1689790471527-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46739,1689790471527-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46739,1689790471527-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-19 18:14:36,959 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:14:36,962 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46739, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,963 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:14:36,963 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:36,975 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-19 18:14:36,980 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:36,983 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763 empty. 2023-07-19 18:14:36,984 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:36,984 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-19 18:14:36,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-19 18:14:36,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 18:14:36,996 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-19 18:14:37,004 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:14:37,008 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:14:37,019 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:37,024 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d empty. 
2023-07-19 18:14:37,032 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:37,032 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-19 18:14:37,058 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:37,061 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => d86f944363fe6bb7338c25a127959763, NAME => 'hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:37,117 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:37,120 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9ea4dee563e7f0f7a6c584dc1c5c929d, NAME => 'hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:37,151 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:37,151 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing d86f944363fe6bb7338c25a127959763, disabling compactions & flushes 2023-07-19 18:14:37,151 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:37,151 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:37,151 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 
after waiting 0 ms 2023-07-19 18:14:37,151 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:37,151 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:37,151 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for d86f944363fe6bb7338c25a127959763: 2023-07-19 18:14:37,166 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:14:37,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:37,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 9ea4dee563e7f0f7a6c584dc1c5c929d, disabling compactions & flushes 2023-07-19 18:14:37,170 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:14:37,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:14:37,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. after waiting 0 ms 2023-07-19 18:14:37,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:14:37,170 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:14:37,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 9ea4dee563e7f0f7a6c584dc1c5c929d: 2023-07-19 18:14:37,175 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:14:37,188 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790477176"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790477176"}]},"ts":"1689790477176"} 2023-07-19 18:14:37,188 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790477171"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790477171"}]},"ts":"1689790477171"} 2023-07-19 18:14:37,221 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 18:14:37,227 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-19 18:14:37,227 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:14:37,228 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:14:37,233 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790477227"}]},"ts":"1689790477227"} 2023-07-19 18:14:37,233 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790477228"}]},"ts":"1689790477228"} 2023-07-19 18:14:37,237 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-19 18:14:37,239 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-19 18:14:37,242 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:37,242 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:37,242 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:37,242 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:37,242 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:37,244 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d86f944363fe6bb7338c25a127959763, ASSIGN}] 2023-07-19 18:14:37,246 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:37,247 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:37,247 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:37,247 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:37,247 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:37,247 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9ea4dee563e7f0f7a6c584dc1c5c929d, ASSIGN}] 2023-07-19 18:14:37,248 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d86f944363fe6bb7338c25a127959763, ASSIGN 2023-07-19 18:14:37,250 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=d86f944363fe6bb7338c25a127959763, ASSIGN; state=OFFLINE, 
location=jenkins-hbase4.apache.org,38251,1689790473799; forceNewPlan=false, retain=false 2023-07-19 18:14:37,251 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9ea4dee563e7f0f7a6c584dc1c5c929d, ASSIGN 2023-07-19 18:14:37,253 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=9ea4dee563e7f0f7a6c584dc1c5c929d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40615,1689790473552; forceNewPlan=false, retain=false 2023-07-19 18:14:37,254 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-19 18:14:37,256 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=d86f944363fe6bb7338c25a127959763, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:37,257 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790477256"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790477256"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790477256"}]},"ts":"1689790477256"} 2023-07-19 18:14:37,257 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=9ea4dee563e7f0f7a6c584dc1c5c929d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:37,257 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790477257"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790477257"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790477257"}]},"ts":"1689790477257"} 2023-07-19 18:14:37,260 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure d86f944363fe6bb7338c25a127959763, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:37,262 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 9ea4dee563e7f0f7a6c584dc1c5c929d, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:37,417 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:37,417 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:14:37,421 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45964, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:14:37,427 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 
2023-07-19 18:14:37,428 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d86f944363fe6bb7338c25a127959763, NAME => 'hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:37,428 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 18:14:37,428 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:14:37,428 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. service=MultiRowMutationService 2023-07-19 18:14:37,429 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9ea4dee563e7f0f7a6c584dc1c5c929d, NAME => 'hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:37,429 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-19 18:14:37,430 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:37,430 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:37,430 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:37,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:37,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:37,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:37,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:37,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:37,435 INFO [StoreOpener-d86f944363fe6bb7338c25a127959763-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family m of region d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:37,435 INFO [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:37,438 DEBUG [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/info 2023-07-19 18:14:37,438 DEBUG [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/info 2023-07-19 18:14:37,439 DEBUG [StoreOpener-d86f944363fe6bb7338c25a127959763-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/m 2023-07-19 18:14:37,439 DEBUG [StoreOpener-d86f944363fe6bb7338c25a127959763-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/m 2023-07-19 18:14:37,439 INFO [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9ea4dee563e7f0f7a6c584dc1c5c929d columnFamilyName info 2023-07-19 18:14:37,439 INFO [StoreOpener-d86f944363fe6bb7338c25a127959763-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d86f944363fe6bb7338c25a127959763 columnFamilyName m 2023-07-19 18:14:37,440 INFO [StoreOpener-d86f944363fe6bb7338c25a127959763-1] regionserver.HStore(310): Store=d86f944363fe6bb7338c25a127959763/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:37,440 INFO [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] regionserver.HStore(310): 
Store=9ea4dee563e7f0f7a6c584dc1c5c929d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:37,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:37,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:37,444 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:37,445 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:37,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:37,450 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:37,454 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:37,454 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:37,455 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d86f944363fe6bb7338c25a127959763; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4d810791, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:37,455 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9ea4dee563e7f0f7a6c584dc1c5c929d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10152340320, jitterRate=-0.0544896274805069}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:37,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d86f944363fe6bb7338c25a127959763: 2023-07-19 18:14:37,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9ea4dee563e7f0f7a6c584dc1c5c929d: 2023-07-19 18:14:37,458 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d., pid=9, masterSystemTime=1689790477417 2023-07-19 18:14:37,458 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763., pid=8, masterSystemTime=1689790477417 2023-07-19 18:14:37,465 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:37,466 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:37,469 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=d86f944363fe6bb7338c25a127959763, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:37,469 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790477468"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790477468"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790477468"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790477468"}]},"ts":"1689790477468"} 2023-07-19 18:14:37,470 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:14:37,471 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 
2023-07-19 18:14:37,471 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=9ea4dee563e7f0f7a6c584dc1c5c929d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:37,473 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790477470"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790477470"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790477470"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790477470"}]},"ts":"1689790477470"} 2023-07-19 18:14:37,480 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-19 18:14:37,480 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure d86f944363fe6bb7338c25a127959763, server=jenkins-hbase4.apache.org,38251,1689790473799 in 214 msec 2023-07-19 18:14:37,486 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-19 18:14:37,489 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-19 18:14:37,490 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=d86f944363fe6bb7338c25a127959763, ASSIGN in 236 msec 2023-07-19 18:14:37,491 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 9ea4dee563e7f0f7a6c584dc1c5c929d, server=jenkins-hbase4.apache.org,40615,1689790473552 in 218 msec 2023-07-19 18:14:37,492 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:14:37,492 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790477492"}]},"ts":"1689790477492"} 2023-07-19 18:14:37,499 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-19 18:14:37,500 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-19 18:14:37,500 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=9ea4dee563e7f0f7a6c584dc1c5c929d, ASSIGN in 242 msec 2023-07-19 18:14:37,502 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:14:37,502 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790477502"}]},"ts":"1689790477502"} 2023-07-19 18:14:37,505 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-19 18:14:37,506 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:14:37,510 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:14:37,511 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 604 msec 2023-07-19 18:14:37,515 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 519 msec 2023-07-19 18:14:37,599 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46739,1689790471527] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:14:37,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-19 18:14:37,602 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:14:37,602 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:14:37,658 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45968, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:14:37,665 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-19 18:14:37,665 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-19 18:14:37,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-19 18:14:37,707 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:14:37,722 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 82 msec 2023-07-19 18:14:37,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-19 18:14:37,758 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:14:37,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 35 msec 2023-07-19 18:14:37,772 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:14:37,772 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:37,775 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-19 18:14:37,779 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 18:14:37,782 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-19 18:14:37,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.559sec 2023-07-19 18:14:37,786 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-19 18:14:37,786 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-19 18:14:37,787 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-19 18:14:37,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-19 18:14:37,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46739,1689790471527-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-19 18:14:37,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46739,1689790471527-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-19 18:14:37,807 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-19 18:14:37,882 DEBUG [Listener at localhost/46039] zookeeper.ReadOnlyZKClient(139): Connect 0x65b017d0 to 127.0.0.1:61716 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:14:37,891 DEBUG [Listener at localhost/46039] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42116de3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:14:37,915 DEBUG [hconnection-0xbd8ecb1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:14:37,930 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32984, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:14:37,944 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,46739,1689790471527 2023-07-19 18:14:37,946 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:37,957 DEBUG [Listener at localhost/46039] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-19 18:14:37,961 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51588, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-19 18:14:37,977 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-19 18:14:37,977 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:14:37,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-19 18:14:37,983 DEBUG [Listener at localhost/46039] zookeeper.ReadOnlyZKClient(139): Connect 0x3072cb01 to 127.0.0.1:61716 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:14:37,989 DEBUG [Listener at localhost/46039] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1792a4f4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=null 2023-07-19 18:14:37,990 INFO [Listener at localhost/46039] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:61716 2023-07-19 18:14:37,993 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:14:38,000 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017ecade2e000a connected 2023-07-19 18:14:38,065 INFO [Listener at localhost/46039] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=425, OpenFileDescriptor=683, MaxFileDescriptor=60000, SystemLoadAverage=479, ProcessCount=173, AvailableMemoryMB=3611 2023-07-19 18:14:38,071 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-19 18:14:38,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:38,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:38,166 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-19 18:14:38,180 INFO [Listener at localhost/46039] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:14:38,180 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:38,180 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:38,180 INFO [Listener at localhost/46039] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:14:38,180 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:14:38,180 INFO [Listener at localhost/46039] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:14:38,181 INFO [Listener at localhost/46039] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:14:38,185 INFO [Listener at localhost/46039] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38419 2023-07-19 18:14:38,186 INFO [Listener at localhost/46039] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:14:38,187 DEBUG [Listener at localhost/46039] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:14:38,189 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block 
reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:38,194 INFO [Listener at localhost/46039] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:14:38,197 INFO [Listener at localhost/46039] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38419 connecting to ZooKeeper ensemble=127.0.0.1:61716 2023-07-19 18:14:38,203 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:384190x0, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:14:38,204 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(162): regionserver:384190x0, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 18:14:38,205 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(162): regionserver:384190x0, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-19 18:14:38,206 DEBUG [Listener at localhost/46039] zookeeper.ZKUtil(164): regionserver:384190x0, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:14:38,211 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38419-0x1017ecade2e000b connected 2023-07-19 18:14:38,213 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38419 2023-07-19 18:14:38,214 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38419 2023-07-19 18:14:38,214 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38419 2023-07-19 18:14:38,215 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38419 2023-07-19 18:14:38,216 DEBUG [Listener at localhost/46039] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38419 2023-07-19 18:14:38,218 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:14:38,218 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:14:38,218 INFO [Listener at localhost/46039] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:14:38,219 INFO [Listener at localhost/46039] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:14:38,219 INFO [Listener at localhost/46039] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:14:38,219 INFO [Listener at localhost/46039] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:14:38,219 INFO [Listener at localhost/46039] 
http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 18:14:38,220 INFO [Listener at localhost/46039] http.HttpServer(1146): Jetty bound to port 46573 2023-07-19 18:14:38,220 INFO [Listener at localhost/46039] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:14:38,222 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:38,222 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@14ac5f55{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:14:38,222 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:38,223 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34f7812e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:14:38,363 INFO [Listener at localhost/46039] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:14:38,364 INFO [Listener at localhost/46039] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:14:38,365 INFO [Listener at localhost/46039] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:14:38,365 INFO [Listener at localhost/46039] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 18:14:38,379 INFO [Listener at localhost/46039] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:14:38,381 INFO [Listener at localhost/46039] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@289fa920{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/java.io.tmpdir/jetty-0_0_0_0-46573-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1428427313766432969/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:14:38,383 INFO [Listener at localhost/46039] server.AbstractConnector(333): Started ServerConnector@35efd609{HTTP/1.1, (http/1.1)}{0.0.0.0:46573} 2023-07-19 18:14:38,383 INFO [Listener at localhost/46039] server.Server(415): Started @12091ms 2023-07-19 18:14:38,389 INFO [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(951): ClusterId : 29f10742-1ec2-44d0-8ada-95f2b5b24d94 2023-07-19 18:14:38,390 DEBUG [RS:3;jenkins-hbase4:38419] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:14:38,393 DEBUG [RS:3;jenkins-hbase4:38419] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:14:38,393 DEBUG [RS:3;jenkins-hbase4:38419] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:14:38,396 DEBUG [RS:3;jenkins-hbase4:38419] procedure.RegionServerProcedureManagerHost(45): 
Procedure online-snapshot initialized 2023-07-19 18:14:38,399 DEBUG [RS:3;jenkins-hbase4:38419] zookeeper.ReadOnlyZKClient(139): Connect 0x714d7cbd to 127.0.0.1:61716 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:14:38,411 DEBUG [RS:3;jenkins-hbase4:38419] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f8a8ce4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:14:38,411 DEBUG [RS:3;jenkins-hbase4:38419] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bb517cd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:14:38,424 DEBUG [RS:3;jenkins-hbase4:38419] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:38419 2023-07-19 18:14:38,424 INFO [RS:3;jenkins-hbase4:38419] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:14:38,424 INFO [RS:3;jenkins-hbase4:38419] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:14:38,424 DEBUG [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 18:14:38,425 INFO [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46739,1689790471527 with isa=jenkins-hbase4.apache.org/172.31.14.131:38419, startcode=1689790478179 2023-07-19 18:14:38,425 DEBUG [RS:3;jenkins-hbase4:38419] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:14:38,432 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34777, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:14:38,432 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46739] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:38,433 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 18:14:38,433 DEBUG [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475 2023-07-19 18:14:38,433 DEBUG [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41243 2023-07-19 18:14:38,433 DEBUG [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42985 2023-07-19 18:14:38,440 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:14:38,441 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:38,442 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:14:38,442 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:38,442 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:14:38,443 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:38,443 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:38,443 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:14:38,443 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:38,443 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:38,443 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:38,445 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 18:14:38,446 INFO [RegionServerTracker-0] 
master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38419,1689790478179] 2023-07-19 18:14:38,450 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:38,450 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:38,450 DEBUG [RS:3;jenkins-hbase4:38419] zookeeper.ZKUtil(162): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:38,451 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:38,451 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46739,1689790471527] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-19 18:14:38,451 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:38,451 WARN [RS:3;jenkins-hbase4:38419] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 18:14:38,452 INFO [RS:3;jenkins-hbase4:38419] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:14:38,452 DEBUG [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:38,452 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:38,452 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:38,461 DEBUG [RS:3;jenkins-hbase4:38419] zookeeper.ZKUtil(162): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:38,461 DEBUG [RS:3;jenkins-hbase4:38419] zookeeper.ZKUtil(162): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:38,462 DEBUG [RS:3;jenkins-hbase4:38419] zookeeper.ZKUtil(162): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:38,462 DEBUG [RS:3;jenkins-hbase4:38419] zookeeper.ZKUtil(162): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:38,464 DEBUG [RS:3;jenkins-hbase4:38419] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:14:38,464 INFO [RS:3;jenkins-hbase4:38419] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:14:38,468 INFO [RS:3;jenkins-hbase4:38419] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:14:38,471 INFO [RS:3;jenkins-hbase4:38419] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:14:38,471 INFO [RS:3;jenkins-hbase4:38419] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:38,472 INFO [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:14:38,475 INFO [RS:3;jenkins-hbase4:38419] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
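The WALFactory(158) record at the top of the block above shows the new region server coming up with AsyncFSWALProvider. Which provider gets instantiated is driven by the hbase.wal.provider setting; the short sketch below simply pins it explicitly (the configuration object and values are illustrative, not read from this test run).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch only: "asyncfs" selects AsyncFSWALProvider (as logged above),
// "filesystem" selects the classic FSHLog-based provider.
public class WalProviderConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println("WAL provider = " + conf.get("hbase.wal.provider"));
  }
}
```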
2023-07-19 18:14:38,475 DEBUG [RS:3;jenkins-hbase4:38419] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:38,475 DEBUG [RS:3;jenkins-hbase4:38419] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:38,475 DEBUG [RS:3;jenkins-hbase4:38419] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:38,475 DEBUG [RS:3;jenkins-hbase4:38419] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:38,476 DEBUG [RS:3;jenkins-hbase4:38419] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:38,476 DEBUG [RS:3;jenkins-hbase4:38419] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:14:38,476 DEBUG [RS:3;jenkins-hbase4:38419] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:38,476 DEBUG [RS:3;jenkins-hbase4:38419] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:38,476 DEBUG [RS:3;jenkins-hbase4:38419] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:38,476 DEBUG [RS:3;jenkins-hbase4:38419] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:14:38,479 INFO [RS:3;jenkins-hbase4:38419] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:38,479 INFO [RS:3;jenkins-hbase4:38419] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:38,480 INFO [RS:3;jenkins-hbase4:38419] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 18:14:38,496 INFO [RS:3;jenkins-hbase4:38419] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:14:38,496 INFO [RS:3;jenkins-hbase4:38419] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38419,1689790478179-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 18:14:38,510 INFO [RS:3;jenkins-hbase4:38419] regionserver.Replication(203): jenkins-hbase4.apache.org,38419,1689790478179 started 2023-07-19 18:14:38,510 INFO [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38419,1689790478179, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38419, sessionid=0x1017ecade2e000b 2023-07-19 18:14:38,510 DEBUG [RS:3;jenkins-hbase4:38419] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:14:38,510 DEBUG [RS:3;jenkins-hbase4:38419] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:38,510 DEBUG [RS:3;jenkins-hbase4:38419] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38419,1689790478179' 2023-07-19 18:14:38,510 DEBUG [RS:3;jenkins-hbase4:38419] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:14:38,511 DEBUG [RS:3;jenkins-hbase4:38419] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:14:38,511 DEBUG [RS:3;jenkins-hbase4:38419] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:14:38,512 DEBUG [RS:3;jenkins-hbase4:38419] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:14:38,512 DEBUG [RS:3;jenkins-hbase4:38419] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:38,512 DEBUG [RS:3;jenkins-hbase4:38419] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38419,1689790478179' 2023-07-19 18:14:38,512 DEBUG [RS:3;jenkins-hbase4:38419] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:14:38,512 DEBUG [RS:3;jenkins-hbase4:38419] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:14:38,513 DEBUG [RS:3;jenkins-hbase4:38419] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:14:38,513 INFO [RS:3;jenkins-hbase4:38419] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 18:14:38,513 INFO [RS:3;jenkins-hbase4:38419] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
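With the fourth region server registered, the records that follow are RSGroupAdminService requests from the test harness: AddRSGroup, ListRSGroupInfos, and a MoveServers call for the master's own address, which is rejected with a ConstraintException that the harness notes as "Got this on setup, FYI". A minimal client-side sketch of the same sequence, using the RSGroupAdminClient that appears later in the stack trace; the connection setup is illustrative and the group name and server address are taken from the log.

```java
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Assumes the hbase-rsgroup coprocessor endpoint is installed on the
      // master, as it is in this mini-cluster.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      rsGroupAdmin.addRSGroup("master");                      // AddRSGroup
      for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {  // ListRSGroupInfos
        System.out.println(info.getName() + " -> " + info.getServers());
      }
      try {
        // MoveServers with the active master's own address; the master rejects
        // it, which is the ConstraintException recorded in the log below.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 46739)),
            "master");
      } catch (IOException expected) {
        System.out.println("expected on setup: " + expected.getMessage());
      }
    }
  }
}
```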
2023-07-19 18:14:38,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:38,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:38,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:38,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:38,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:38,535 DEBUG [hconnection-0x22934466-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:14:38,541 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32988, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:14:38,546 DEBUG [hconnection-0x22934466-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:14:38,549 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45980, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:14:38,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:38,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:38,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:38,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:38,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:51588 deadline: 1689791678562, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:38,565 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:14:38,571 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:38,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:38,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:38,574 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:38,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:38,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:38,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:38,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:38,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:38,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:38,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:38,590 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:38,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:38,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:38,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:38,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:38,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:38251] to rsgroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:38,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:38,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:38,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:38,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:38,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(238): Moving server region d86f944363fe6bb7338c25a127959763, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:38,617 INFO [RS:3;jenkins-hbase4:38419] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38419%2C1689790478179, suffix=, logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,38419,1689790478179, archiveDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs, maxLogs=32 2023-07-19 18:14:38,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=d86f944363fe6bb7338c25a127959763, REOPEN/MOVE 2023-07-19 18:14:38,619 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=d86f944363fe6bb7338c25a127959763, REOPEN/MOVE 2023-07-19 18:14:38,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 18:14:38,621 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta 
row=d86f944363fe6bb7338c25a127959763, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:38,621 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790478620"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790478620"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790478620"}]},"ts":"1689790478620"} 2023-07-19 18:14:38,624 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure d86f944363fe6bb7338c25a127959763, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:38,645 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK] 2023-07-19 18:14:38,645 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK] 2023-07-19 18:14:38,646 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK] 2023-07-19 18:14:38,649 INFO [RS:3;jenkins-hbase4:38419] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,38419,1689790478179/jenkins-hbase4.apache.org%2C38419%2C1689790478179.1689790478619 2023-07-19 18:14:38,649 DEBUG [RS:3;jenkins-hbase4:38419] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK], DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK], DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK]] 2023-07-19 18:14:38,787 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:38,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d86f944363fe6bb7338c25a127959763, disabling compactions & flushes 2023-07-19 18:14:38,788 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:38,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:38,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. after waiting 0 ms 2023-07-19 18:14:38,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 
2023-07-19 18:14:38,789 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d86f944363fe6bb7338c25a127959763 1/1 column families, dataSize=1.38 KB heapSize=2.36 KB 2023-07-19 18:14:38,870 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/.tmp/m/45a91ef889b04a48a49167c4d3592f07 2023-07-19 18:14:38,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/.tmp/m/45a91ef889b04a48a49167c4d3592f07 as hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/m/45a91ef889b04a48a49167c4d3592f07 2023-07-19 18:14:38,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/m/45a91ef889b04a48a49167c4d3592f07, entries=3, sequenceid=9, filesize=5.2 K 2023-07-19 18:14:38,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1414, heapSize ~2.34 KB/2400, currentSize=0 B/0 for d86f944363fe6bb7338c25a127959763 in 146ms, sequenceid=9, compaction requested=false 2023-07-19 18:14:38,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-19 18:14:38,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-19 18:14:38,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 18:14:38,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 
2023-07-19 18:14:38,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d86f944363fe6bb7338c25a127959763: 2023-07-19 18:14:38,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d86f944363fe6bb7338c25a127959763 move to jenkins-hbase4.apache.org,43775,1689790473982 record at close sequenceid=9 2023-07-19 18:14:38,954 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:38,955 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=d86f944363fe6bb7338c25a127959763, regionState=CLOSED 2023-07-19 18:14:38,955 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790478954"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790478954"}]},"ts":"1689790478954"} 2023-07-19 18:14:38,960 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-19 18:14:38,960 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure d86f944363fe6bb7338c25a127959763, server=jenkins-hbase4.apache.org,38251,1689790473799 in 333 msec 2023-07-19 18:14:38,962 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=d86f944363fe6bb7338c25a127959763, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:39,112 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
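The REOPEN/MOVE above closed hbase:rsgroup on 38251 and, as the records after this sketch show, reopens it on 43775. Clients still holding the old cached location get RegionMovedException (callId 3 and callId 42 further down) and re-resolve the region from hbase:meta. A hedged sketch of forcing that re-lookup by hand; the row key is arbitrary, and in practice the client library refreshes its cache automatically on this exception.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public final class RegionRelocationSketch {
  // reload=true bypasses the stale cached entry and reads hbase:meta again.
  static HRegionLocation refresh(Connection conn) throws IOException {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
      return locator.getRegionLocation(Bytes.toBytes(""), true);
    }
  }

  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      System.out.println(refresh(conn));
    }
  }
}
```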
2023-07-19 18:14:39,112 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=d86f944363fe6bb7338c25a127959763, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:39,113 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790479112"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790479112"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790479112"}]},"ts":"1689790479112"} 2023-07-19 18:14:39,116 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure d86f944363fe6bb7338c25a127959763, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:39,270 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:39,270 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:14:39,272 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:14:39,279 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:39,279 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d86f944363fe6bb7338c25a127959763, NAME => 'hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:39,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 18:14:39,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. service=MultiRowMutationService 2023-07-19 18:14:39,280 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-19 18:14:39,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:39,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:39,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:39,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:39,283 INFO [StoreOpener-d86f944363fe6bb7338c25a127959763-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:39,285 DEBUG [StoreOpener-d86f944363fe6bb7338c25a127959763-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/m 2023-07-19 18:14:39,285 DEBUG [StoreOpener-d86f944363fe6bb7338c25a127959763-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/m 2023-07-19 18:14:39,285 INFO [StoreOpener-d86f944363fe6bb7338c25a127959763-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d86f944363fe6bb7338c25a127959763 columnFamilyName m 2023-07-19 18:14:39,301 DEBUG [StoreOpener-d86f944363fe6bb7338c25a127959763-1] regionserver.HStore(539): loaded hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/m/45a91ef889b04a48a49167c4d3592f07 2023-07-19 18:14:39,302 INFO [StoreOpener-d86f944363fe6bb7338c25a127959763-1] regionserver.HStore(310): Store=d86f944363fe6bb7338c25a127959763/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:39,304 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:39,307 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:39,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d86f944363fe6bb7338c25a127959763 2023-07-19 18:14:39,314 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d86f944363fe6bb7338c25a127959763; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4735af51, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:39,314 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d86f944363fe6bb7338c25a127959763: 2023-07-19 18:14:39,321 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763., pid=14, masterSystemTime=1689790479270 2023-07-19 18:14:39,326 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:39,327 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:14:39,328 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=d86f944363fe6bb7338c25a127959763, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:39,328 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790479328"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790479328"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790479328"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790479328"}]},"ts":"1689790479328"} 2023-07-19 18:14:39,335 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-19 18:14:39,335 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure d86f944363fe6bb7338c25a127959763, server=jenkins-hbase4.apache.org,43775,1689790473982 in 216 msec 2023-07-19 18:14:39,337 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=d86f944363fe6bb7338c25a127959763, REOPEN/MOVE in 719 msec 2023-07-19 18:14:39,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-19 18:14:39,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799, jenkins-hbase4.apache.org,38419,1689790478179] are moved back to default 2023-07-19 18:14:39,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:39,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:39,622 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38251] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:45980 deadline: 1689790539622, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43775 startCode=1689790473982. As of locationSeqNum=9. 2023-07-19 18:14:39,730 DEBUG [hconnection-0x22934466-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:14:39,740 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50778, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:14:39,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:39,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:39,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:39,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:39,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:14:39,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:39,779 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:14:39,782 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38251] ipc.CallRunner(144): callId: 42 service: ClientService methodName: ExecService size: 619 connection: 172.31.14.131:45968 deadline: 1689790539781, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43775 startCode=1689790473982. As of locationSeqNum=9. 
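The HMaster$4(2112) record above is the create request for Group_testTableMoveTruncateAndDrop: a single column family 'f' with stock attributes, REGION_REPLICATION 1, and, as the region boundaries below confirm, four split points (aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz) giving five regions. Roughly equivalent client code, as a sketch with the descriptor values read off the log and everything else illustrative:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptorBuilder table = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
          .setRegionReplication(1)                             // REGION_REPLICATION => '1'
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)                               // VERSIONS => '1'
              .setBloomFilterType(BloomType.NONE)              // BLOOMFILTER => 'NONE'
              .setBlocksize(65536)                             // BLOCKSIZE => '65536'
              .build());
      // Four split keys yield the five regions created below; toBytesBinary
      // understands the \xNN escapes used in the logged region names.
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(table.build(), splits);
    }
  }
}
```

The pre-split matters for this test because the five regions can then be distributed across the servers that were just moved into Group_testTableMoveTruncateAndDrop, which is what the assignment records below work through.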
2023-07-19 18:14:39,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-19 18:14:39,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-19 18:14:39,888 DEBUG [PEWorker-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:14:39,889 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50790, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:14:39,892 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:39,892 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:39,893 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:39,893 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:39,899 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:14:39,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-19 18:14:39,905 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:39,905 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:39,905 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:39,905 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:39,905 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:39,906 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26 empty. 2023-07-19 18:14:39,906 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e empty. 
2023-07-19 18:14:39,907 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4 empty. 2023-07-19 18:14:39,907 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7 empty. 2023-07-19 18:14:39,907 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0 empty. 2023-07-19 18:14:39,907 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:39,907 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:39,907 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:39,907 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:39,908 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:39,908 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 18:14:39,939 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:39,940 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 58fc4e90003c8b08ddc8335792cf7ba4, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:39,941 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9616664d7fc62c86fccfdcd29b92ba26, NAME => 
'Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:39,941 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 317e6ec6805082da0b86b5c9e86ab70e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:40,010 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:40,011 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 317e6ec6805082da0b86b5c9e86ab70e, disabling compactions & flushes 2023-07-19 18:14:40,011 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:40,011 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:40,011 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. after waiting 0 ms 2023-07-19 18:14:40,011 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:40,011 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 
2023-07-19 18:14:40,011 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 317e6ec6805082da0b86b5c9e86ab70e: 2023-07-19 18:14:40,012 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 09e4b50ce967513aa4fb462fc4309af0, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:40,013 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:40,014 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 9616664d7fc62c86fccfdcd29b92ba26, disabling compactions & flushes 2023-07-19 18:14:40,014 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:40,014 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:40,014 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. after waiting 0 ms 2023-07-19 18:14:40,014 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:40,014 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 
2023-07-19 18:14:40,014 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 9616664d7fc62c86fccfdcd29b92ba26: 2023-07-19 18:14:40,015 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => dd4249aba95ef691c99a6dfc932a11e7, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:40,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:40,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 58fc4e90003c8b08ddc8335792cf7ba4, disabling compactions & flushes 2023-07-19 18:14:40,016 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:40,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:40,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. after waiting 0 ms 2023-07-19 18:14:40,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:40,016 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 
2023-07-19 18:14:40,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 58fc4e90003c8b08ddc8335792cf7ba4: 2023-07-19 18:14:40,052 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:40,053 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 09e4b50ce967513aa4fb462fc4309af0, disabling compactions & flushes 2023-07-19 18:14:40,053 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:40,053 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:40,053 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. after waiting 0 ms 2023-07-19 18:14:40,053 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:40,053 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:40,053 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 09e4b50ce967513aa4fb462fc4309af0: 2023-07-19 18:14:40,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:40,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing dd4249aba95ef691c99a6dfc932a11e7, disabling compactions & flushes 2023-07-19 18:14:40,057 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:40,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:40,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 
after waiting 0 ms 2023-07-19 18:14:40,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:40,057 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:40,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for dd4249aba95ef691c99a6dfc932a11e7: 2023-07-19 18:14:40,062 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:14:40,063 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790480063"}]},"ts":"1689790480063"} 2023-07-19 18:14:40,063 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790480063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790480063"}]},"ts":"1689790480063"} 2023-07-19 18:14:40,064 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790480063"}]},"ts":"1689790480063"} 2023-07-19 18:14:40,064 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790480063"}]},"ts":"1689790480063"} 2023-07-19 18:14:40,064 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790480063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790480063"}]},"ts":"1689790480063"} 2023-07-19 18:14:40,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-19 18:14:40,113 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-19 18:14:40,115 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:14:40,116 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790480115"}]},"ts":"1689790480115"} 2023-07-19 18:14:40,118 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-19 18:14:40,127 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:40,128 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:40,128 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:40,128 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:40,128 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, ASSIGN}] 2023-07-19 18:14:40,131 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, ASSIGN 2023-07-19 18:14:40,132 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, ASSIGN 2023-07-19 18:14:40,132 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, ASSIGN 2023-07-19 18:14:40,133 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, ASSIGN 2023-07-19 18:14:40,134 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40615,1689790473552; forceNewPlan=false, retain=false 2023-07-19 18:14:40,134 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:40,135 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40615,1689790473552; forceNewPlan=false, retain=false 2023-07-19 18:14:40,135 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, ASSIGN 2023-07-19 18:14:40,135 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:40,136 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40615,1689790473552; forceNewPlan=false, retain=false 2023-07-19 18:14:40,284 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-19 18:14:40,287 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=9616664d7fc62c86fccfdcd29b92ba26, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:40,287 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=09e4b50ce967513aa4fb462fc4309af0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:40,287 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=317e6ec6805082da0b86b5c9e86ab70e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:40,287 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=dd4249aba95ef691c99a6dfc932a11e7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:40,288 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480287"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790480287"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790480287"}]},"ts":"1689790480287"} 2023-07-19 18:14:40,288 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480287"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790480287"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790480287"}]},"ts":"1689790480287"} 2023-07-19 18:14:40,288 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790480287"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790480287"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790480287"}]},"ts":"1689790480287"} 2023-07-19 18:14:40,288 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790480287"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790480287"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790480287"}]},"ts":"1689790480287"} 2023-07-19 18:14:40,287 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=58fc4e90003c8b08ddc8335792cf7ba4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:40,288 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480287"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790480287"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790480287"}]},"ts":"1689790480287"} 2023-07-19 18:14:40,291 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=19, state=RUNNABLE; OpenRegionProcedure 
09e4b50ce967513aa4fb462fc4309af0, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:40,292 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=20, state=RUNNABLE; OpenRegionProcedure dd4249aba95ef691c99a6dfc932a11e7, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:40,294 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=17, state=RUNNABLE; OpenRegionProcedure 317e6ec6805082da0b86b5c9e86ab70e, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:40,296 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=16, state=RUNNABLE; OpenRegionProcedure 9616664d7fc62c86fccfdcd29b92ba26, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:40,297 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=18, state=RUNNABLE; OpenRegionProcedure 58fc4e90003c8b08ddc8335792cf7ba4, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:40,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-19 18:14:40,449 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:40,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9616664d7fc62c86fccfdcd29b92ba26, NAME => 'Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 18:14:40,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:40,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:40,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:40,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:40,450 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 
2023-07-19 18:14:40,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 58fc4e90003c8b08ddc8335792cf7ba4, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 18:14:40,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:40,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:40,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:40,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:40,452 INFO [StoreOpener-9616664d7fc62c86fccfdcd29b92ba26-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:40,454 DEBUG [StoreOpener-9616664d7fc62c86fccfdcd29b92ba26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/f 2023-07-19 18:14:40,454 DEBUG [StoreOpener-9616664d7fc62c86fccfdcd29b92ba26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/f 2023-07-19 18:14:40,454 INFO [StoreOpener-9616664d7fc62c86fccfdcd29b92ba26-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9616664d7fc62c86fccfdcd29b92ba26 columnFamilyName f 2023-07-19 18:14:40,455 INFO [StoreOpener-58fc4e90003c8b08ddc8335792cf7ba4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:40,455 INFO [StoreOpener-9616664d7fc62c86fccfdcd29b92ba26-1] regionserver.HStore(310): Store=9616664d7fc62c86fccfdcd29b92ba26/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:40,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:40,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:40,459 DEBUG [StoreOpener-58fc4e90003c8b08ddc8335792cf7ba4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/f 2023-07-19 18:14:40,459 DEBUG [StoreOpener-58fc4e90003c8b08ddc8335792cf7ba4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/f 2023-07-19 18:14:40,463 INFO [StoreOpener-58fc4e90003c8b08ddc8335792cf7ba4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 58fc4e90003c8b08ddc8335792cf7ba4 columnFamilyName f 2023-07-19 18:14:40,463 INFO [StoreOpener-58fc4e90003c8b08ddc8335792cf7ba4-1] regionserver.HStore(310): Store=58fc4e90003c8b08ddc8335792cf7ba4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:40,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:40,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:40,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:40,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:40,471 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:40,471 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9616664d7fc62c86fccfdcd29b92ba26; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11520377920, jitterRate=0.07291880249977112}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:40,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9616664d7fc62c86fccfdcd29b92ba26: 2023-07-19 18:14:40,472 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26., pid=24, masterSystemTime=1689790480444 2023-07-19 18:14:40,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:40,474 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 58fc4e90003c8b08ddc8335792cf7ba4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9549158400, jitterRate=-0.11066532135009766}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:40,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 58fc4e90003c8b08ddc8335792cf7ba4: 2023-07-19 18:14:40,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:40,475 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:40,475 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 
2023-07-19 18:14:40,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 09e4b50ce967513aa4fb462fc4309af0, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 18:14:40,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:40,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:40,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:40,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:40,477 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4., pid=25, masterSystemTime=1689790480446 2023-07-19 18:14:40,477 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=9616664d7fc62c86fccfdcd29b92ba26, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:40,478 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790480477"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790480477"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790480477"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790480477"}]},"ts":"1689790480477"} 2023-07-19 18:14:40,478 INFO [StoreOpener-09e4b50ce967513aa4fb462fc4309af0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:40,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:40,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:40,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 
2023-07-19 18:14:40,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 317e6ec6805082da0b86b5c9e86ab70e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 18:14:40,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:40,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:40,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:40,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:40,482 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=58fc4e90003c8b08ddc8335792cf7ba4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:40,482 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480481"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790480481"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790480481"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790480481"}]},"ts":"1689790480481"} 2023-07-19 18:14:40,483 INFO [StoreOpener-317e6ec6805082da0b86b5c9e86ab70e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:40,483 DEBUG [StoreOpener-09e4b50ce967513aa4fb462fc4309af0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/f 2023-07-19 18:14:40,484 DEBUG [StoreOpener-09e4b50ce967513aa4fb462fc4309af0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/f 2023-07-19 18:14:40,486 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=16 2023-07-19 18:14:40,486 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=16, state=SUCCESS; OpenRegionProcedure 9616664d7fc62c86fccfdcd29b92ba26, server=jenkins-hbase4.apache.org,43775,1689790473982 in 185 msec 2023-07-19 18:14:40,486 DEBUG [StoreOpener-317e6ec6805082da0b86b5c9e86ab70e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/f 2023-07-19 18:14:40,487 DEBUG [StoreOpener-317e6ec6805082da0b86b5c9e86ab70e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/f 2023-07-19 18:14:40,487 INFO [StoreOpener-317e6ec6805082da0b86b5c9e86ab70e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 317e6ec6805082da0b86b5c9e86ab70e columnFamilyName f 2023-07-19 18:14:40,488 INFO [StoreOpener-317e6ec6805082da0b86b5c9e86ab70e-1] regionserver.HStore(310): Store=317e6ec6805082da0b86b5c9e86ab70e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:40,489 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, ASSIGN in 358 msec 2023-07-19 18:14:40,490 INFO [StoreOpener-09e4b50ce967513aa4fb462fc4309af0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 09e4b50ce967513aa4fb462fc4309af0 columnFamilyName f 2023-07-19 18:14:40,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:40,491 INFO [StoreOpener-09e4b50ce967513aa4fb462fc4309af0-1] regionserver.HStore(310): Store=09e4b50ce967513aa4fb462fc4309af0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:40,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:40,493 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:40,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:40,495 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=18 2023-07-19 18:14:40,495 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=18, state=SUCCESS; OpenRegionProcedure 58fc4e90003c8b08ddc8335792cf7ba4, server=jenkins-hbase4.apache.org,40615,1689790473552 in 193 msec 2023-07-19 18:14:40,497 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, ASSIGN in 367 msec 2023-07-19 18:14:40,498 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:40,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:40,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:40,512 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 317e6ec6805082da0b86b5c9e86ab70e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11518670080, jitterRate=0.07275974750518799}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:40,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 317e6ec6805082da0b86b5c9e86ab70e: 2023-07-19 18:14:40,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:40,513 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e., pid=23, masterSystemTime=1689790480446 2023-07-19 18:14:40,514 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 09e4b50ce967513aa4fb462fc4309af0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10028656960, jitterRate=-0.0660085380077362}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:40,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 
09e4b50ce967513aa4fb462fc4309af0: 2023-07-19 18:14:40,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0., pid=21, masterSystemTime=1689790480444 2023-07-19 18:14:40,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:40,518 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:40,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:40,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dd4249aba95ef691c99a6dfc932a11e7, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 18:14:40,519 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=317e6ec6805082da0b86b5c9e86ab70e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:40,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:40,519 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480519"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790480519"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790480519"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790480519"}]},"ts":"1689790480519"} 2023-07-19 18:14:40,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:40,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:40,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:40,522 INFO [StoreOpener-dd4249aba95ef691c99a6dfc932a11e7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:40,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 
2023-07-19 18:14:40,523 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=09e4b50ce967513aa4fb462fc4309af0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:40,528 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=17 2023-07-19 18:14:40,528 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=17, state=SUCCESS; OpenRegionProcedure 317e6ec6805082da0b86b5c9e86ab70e, server=jenkins-hbase4.apache.org,40615,1689790473552 in 229 msec 2023-07-19 18:14:40,528 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480522"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790480522"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790480522"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790480522"}]},"ts":"1689790480522"} 2023-07-19 18:14:40,528 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:40,529 DEBUG [StoreOpener-dd4249aba95ef691c99a6dfc932a11e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/f 2023-07-19 18:14:40,531 DEBUG [StoreOpener-dd4249aba95ef691c99a6dfc932a11e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/f 2023-07-19 18:14:40,532 INFO [StoreOpener-dd4249aba95ef691c99a6dfc932a11e7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dd4249aba95ef691c99a6dfc932a11e7 columnFamilyName f 2023-07-19 18:14:40,534 INFO [StoreOpener-dd4249aba95ef691c99a6dfc932a11e7-1] regionserver.HStore(310): Store=dd4249aba95ef691c99a6dfc932a11e7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:40,534 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, ASSIGN in 400 msec 2023-07-19 18:14:40,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:40,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:40,539 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=19 2023-07-19 18:14:40,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=19, state=SUCCESS; OpenRegionProcedure 09e4b50ce967513aa4fb462fc4309af0, server=jenkins-hbase4.apache.org,43775,1689790473982 in 245 msec 2023-07-19 18:14:40,542 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, ASSIGN in 412 msec 2023-07-19 18:14:40,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:40,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:40,547 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dd4249aba95ef691c99a6dfc932a11e7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11217172640, jitterRate=0.04468061029911041}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:40,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dd4249aba95ef691c99a6dfc932a11e7: 2023-07-19 18:14:40,548 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7., pid=22, masterSystemTime=1689790480446 2023-07-19 18:14:40,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:40,551 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 
2023-07-19 18:14:40,551 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=dd4249aba95ef691c99a6dfc932a11e7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:40,552 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790480551"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790480551"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790480551"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790480551"}]},"ts":"1689790480551"} 2023-07-19 18:14:40,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=20 2023-07-19 18:14:40,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=20, state=SUCCESS; OpenRegionProcedure dd4249aba95ef691c99a6dfc932a11e7, server=jenkins-hbase4.apache.org,40615,1689790473552 in 262 msec 2023-07-19 18:14:40,562 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=15 2023-07-19 18:14:40,563 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, ASSIGN in 431 msec 2023-07-19 18:14:40,564 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:14:40,564 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790480564"}]},"ts":"1689790480564"} 2023-07-19 18:14:40,567 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-19 18:14:40,570 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:14:40,573 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 798 msec 2023-07-19 18:14:40,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-19 18:14:40,908 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-19 18:14:40,908 DEBUG [Listener at localhost/46039] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-19 18:14:40,909 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:40,917 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 
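The entries above record CreateTableProcedure pid=15 finishing and the test then blocking until every region of Group_testTableMoveTruncateAndDrop is assigned (HBaseTestingUtility(3430), 60000 ms timeout). A minimal sketch of that wait, assuming a test-utility field named TEST_UTIL (the field and class names are illustrative assumptions, not taken from the actual test source):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignmentSketch {
  // Assumed handle to the running mini cluster; the name TEST_UTIL is illustrative.
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
  static final TableName TABLE = TableName.valueOf("Group_testTableMoveTruncateAndDrop");

  static void waitForTable() throws Exception {
    // Blocks until every region of the table is assigned (hbase:meta rows plus
    // AssignmentManager state) or the 60s timeout expires, mirroring the
    // Waiter(180)/HBaseTestingUtility(3484) entries around this point in the log.
    TEST_UTIL.waitUntilAllRegionsAssigned(TABLE, 60000);
  }
}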
2023-07-19 18:14:40,917 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:40,918 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-19 18:14:40,918 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:40,923 DEBUG [Listener at localhost/46039] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:14:40,927 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57300, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:14:40,931 DEBUG [Listener at localhost/46039] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:14:40,934 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44604, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:14:40,935 DEBUG [Listener at localhost/46039] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:14:40,938 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33112, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:14:40,940 DEBUG [Listener at localhost/46039] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:14:40,944 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55664, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:14:40,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:40,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:40,959 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:40,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:40,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:40,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:40,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:40,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:40,978 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:40,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region 9616664d7fc62c86fccfdcd29b92ba26 to RSGroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:40,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:40,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:40,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:40,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:40,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:40,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, REOPEN/MOVE 2023-07-19 18:14:40,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region 317e6ec6805082da0b86b5c9e86ab70e to RSGroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:40,982 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, REOPEN/MOVE 2023-07-19 18:14:40,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:40,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:40,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:40,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:40,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:40,983 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=9616664d7fc62c86fccfdcd29b92ba26, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:40,983 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790480983"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790480983"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790480983"}]},"ts":"1689790480983"} 2023-07-19 18:14:40,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, REOPEN/MOVE 2023-07-19 18:14:40,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region 58fc4e90003c8b08ddc8335792cf7ba4 to RSGroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:40,984 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, REOPEN/MOVE 2023-07-19 18:14:40,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:40,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:40,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:40,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:40,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:40,987 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=317e6ec6805082da0b86b5c9e86ab70e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:40,987 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480986"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790480986"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790480986"}]},"ts":"1689790480986"} 2023-07-19 18:14:40,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, REOPEN/MOVE 2023-07-19 18:14:40,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region 09e4b50ce967513aa4fb462fc4309af0 to RSGroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:40,988 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, REOPEN/MOVE 2023-07-19 18:14:40,988 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure 9616664d7fc62c86fccfdcd29b92ba26, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:40,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:40,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:40,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:40,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:40,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:40,990 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=58fc4e90003c8b08ddc8335792cf7ba4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:40,990 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480990"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790480990"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790480990"}]},"ts":"1689790480990"} 2023-07-19 18:14:40,991 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 317e6ec6805082da0b86b5c9e86ab70e, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:40,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, REOPEN/MOVE 2023-07-19 18:14:40,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region dd4249aba95ef691c99a6dfc932a11e7 to RSGroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:40,993 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, REOPEN/MOVE 2023-07-19 18:14:40,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:40,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:40,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:40,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:40,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:40,996 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=09e4b50ce967513aa4fb462fc4309af0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:40,996 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=28, state=RUNNABLE; CloseRegionProcedure 58fc4e90003c8b08ddc8335792cf7ba4, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:40,996 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790480996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790480996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790480996"}]},"ts":"1689790480996"} 2023-07-19 18:14:40,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, REOPEN/MOVE 2023-07-19 18:14:40,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_401676244, current retry=0 2023-07-19 18:14:40,998 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, REOPEN/MOVE 2023-07-19 18:14:41,000 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure 09e4b50ce967513aa4fb462fc4309af0, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:41,003 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=dd4249aba95ef691c99a6dfc932a11e7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:41,003 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790481003"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790481003"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790481003"}]},"ts":"1689790481003"} 2023-07-19 18:14:41,006 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=32, state=RUNNABLE; CloseRegionProcedure dd4249aba95ef691c99a6dfc932a11e7, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:41,146 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:41,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 09e4b50ce967513aa4fb462fc4309af0, disabling compactions & flushes 2023-07-19 18:14:41,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 
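At 18:14:40,967 above, the RSGroupAdminEndpoint received a request to move table Group_testTableMoveTruncateAndDrop into rsgroup Group_testTableMoveTruncateAndDrop_401676244, and the master then queued one REOPEN/MOVE TransitRegionStateProcedure per region, each with the CloseRegionProcedure subprocedures now executing. A hedged sketch of the client-side call that issues such a move, using the branch-2.4 hbase-rsgroup client API (the connection handle and variable names are assumptions for illustration):

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

class MoveTableToGroupSketch {
  static void moveTable(Connection conn) throws Exception {
    // RSGroupAdminClient talks to the RSGroupAdminEndpoint coprocessor on the master,
    // which is what logs the "move tables [...] to rsgroup" entry above.
    RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
        "Group_testTableMoveTruncateAndDrop_401676244");
  }
}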
2023-07-19 18:14:41,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:41,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. after waiting 0 ms 2023-07-19 18:14:41,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:41,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:41,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dd4249aba95ef691c99a6dfc932a11e7, disabling compactions & flushes 2023-07-19 18:14:41,152 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:41,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:41,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. after waiting 0 ms 2023-07-19 18:14:41,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:41,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:41,160 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 
2023-07-19 18:14:41,161 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 09e4b50ce967513aa4fb462fc4309af0: 2023-07-19 18:14:41,161 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 09e4b50ce967513aa4fb462fc4309af0 move to jenkins-hbase4.apache.org,38251,1689790473799 record at close sequenceid=2 2023-07-19 18:14:41,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:41,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:41,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9616664d7fc62c86fccfdcd29b92ba26, disabling compactions & flushes 2023-07-19 18:14:41,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:41,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:41,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. after waiting 0 ms 2023-07-19 18:14:41,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 
2023-07-19 18:14:41,166 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=09e4b50ce967513aa4fb462fc4309af0, regionState=CLOSED 2023-07-19 18:14:41,166 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790481166"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790481166"}]},"ts":"1689790481166"} 2023-07-19 18:14:41,172 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-19 18:14:41,172 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure 09e4b50ce967513aa4fb462fc4309af0, server=jenkins-hbase4.apache.org,43775,1689790473982 in 168 msec 2023-07-19 18:14:41,173 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38251,1689790473799; forceNewPlan=false, retain=false 2023-07-19 18:14:41,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:41,184 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:41,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dd4249aba95ef691c99a6dfc932a11e7: 2023-07-19 18:14:41,184 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding dd4249aba95ef691c99a6dfc932a11e7 move to jenkins-hbase4.apache.org,38419,1689790478179 record at close sequenceid=2 2023-07-19 18:14:41,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:41,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:41,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 317e6ec6805082da0b86b5c9e86ab70e, disabling compactions & flushes 2023-07-19 18:14:41,189 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:41,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:41,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 
after waiting 0 ms 2023-07-19 18:14:41,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:41,194 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:41,195 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=dd4249aba95ef691c99a6dfc932a11e7, regionState=CLOSED 2023-07-19 18:14:41,196 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790481195"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790481195"}]},"ts":"1689790481195"} 2023-07-19 18:14:41,196 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:41,197 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9616664d7fc62c86fccfdcd29b92ba26: 2023-07-19 18:14:41,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9616664d7fc62c86fccfdcd29b92ba26 move to jenkins-hbase4.apache.org,38251,1689790473799 record at close sequenceid=2 2023-07-19 18:14:41,207 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:41,208 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=9616664d7fc62c86fccfdcd29b92ba26, regionState=CLOSED 2023-07-19 18:14:41,208 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790481208"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790481208"}]},"ts":"1689790481208"} 2023-07-19 18:14:41,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:41,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 
2023-07-19 18:14:41,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 317e6ec6805082da0b86b5c9e86ab70e: 2023-07-19 18:14:41,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 317e6ec6805082da0b86b5c9e86ab70e move to jenkins-hbase4.apache.org,38419,1689790478179 record at close sequenceid=2 2023-07-19 18:14:41,216 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:41,216 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:41,217 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 58fc4e90003c8b08ddc8335792cf7ba4, disabling compactions & flushes 2023-07-19 18:14:41,217 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:41,217 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:41,218 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. after waiting 0 ms 2023-07-19 18:14:41,218 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 
2023-07-19 18:14:41,219 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-19 18:14:41,219 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure dd4249aba95ef691c99a6dfc932a11e7, server=jenkins-hbase4.apache.org,40615,1689790473552 in 196 msec 2023-07-19 18:14:41,221 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38419,1689790478179; forceNewPlan=false, retain=false 2023-07-19 18:14:41,222 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-19 18:14:41,223 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure 9616664d7fc62c86fccfdcd29b92ba26, server=jenkins-hbase4.apache.org,43775,1689790473982 in 228 msec 2023-07-19 18:14:41,223 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38251,1689790473799; forceNewPlan=false, retain=false 2023-07-19 18:14:41,227 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=317e6ec6805082da0b86b5c9e86ab70e, regionState=CLOSED 2023-07-19 18:14:41,227 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790481227"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790481227"}]},"ts":"1689790481227"} 2023-07-19 18:14:41,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:41,236 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 
2023-07-19 18:14:41,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 58fc4e90003c8b08ddc8335792cf7ba4: 2023-07-19 18:14:41,236 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 58fc4e90003c8b08ddc8335792cf7ba4 move to jenkins-hbase4.apache.org,38419,1689790478179 record at close sequenceid=2 2023-07-19 18:14:41,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:41,240 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-19 18:14:41,240 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 317e6ec6805082da0b86b5c9e86ab70e, server=jenkins-hbase4.apache.org,40615,1689790473552 in 240 msec 2023-07-19 18:14:41,241 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=58fc4e90003c8b08ddc8335792cf7ba4, regionState=CLOSED 2023-07-19 18:14:41,241 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790481240"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790481240"}]},"ts":"1689790481240"} 2023-07-19 18:14:41,242 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38419,1689790478179; forceNewPlan=false, retain=false 2023-07-19 18:14:41,248 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=28 2023-07-19 18:14:41,249 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=28, state=SUCCESS; CloseRegionProcedure 58fc4e90003c8b08ddc8335792cf7ba4, server=jenkins-hbase4.apache.org,40615,1689790473552 in 247 msec 2023-07-19 18:14:41,250 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38419,1689790478179; forceNewPlan=false, retain=false 2023-07-19 18:14:41,324 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
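With all five regions closed and the balancer having picked new targets ("Reassigned 5 regions" above), a test would typically re-check the table's group membership once the move settles. A sketch of that check, assuming the same rsGroupAdmin handle as in the earlier sketch (assertion style is illustrative, not the literal test code):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

class VerifyGroupMembershipSketch {
  static void verify(RSGroupAdmin rsGroupAdmin) throws Exception {
    // getRSGroupInfoOfTable is the same RPC logged earlier as
    // "master service request for RSGroupAdminService.GetRSGroupInfoOfTable".
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(
        TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
    if (!"Group_testTableMoveTruncateAndDrop_401676244".equals(info.getName())) {
      throw new AssertionError("table not in expected rsgroup, got: " + info.getName());
    }
  }
}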
2023-07-19 18:14:41,326 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=dd4249aba95ef691c99a6dfc932a11e7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:41,327 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=317e6ec6805082da0b86b5c9e86ab70e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:41,327 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790481326"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790481326"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790481326"}]},"ts":"1689790481326"} 2023-07-19 18:14:41,327 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790481327"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790481327"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790481327"}]},"ts":"1689790481327"} 2023-07-19 18:14:41,327 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=58fc4e90003c8b08ddc8335792cf7ba4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:41,327 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=9616664d7fc62c86fccfdcd29b92ba26, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:41,327 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790481327"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790481327"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790481327"}]},"ts":"1689790481327"} 2023-07-19 18:14:41,327 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790481326"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790481326"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790481326"}]},"ts":"1689790481326"} 2023-07-19 18:14:41,327 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=09e4b50ce967513aa4fb462fc4309af0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:41,328 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790481326"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790481326"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790481326"}]},"ts":"1689790481326"} 2023-07-19 18:14:41,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=32, state=RUNNABLE; OpenRegionProcedure 
dd4249aba95ef691c99a6dfc932a11e7, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:41,332 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=27, state=RUNNABLE; OpenRegionProcedure 317e6ec6805082da0b86b5c9e86ab70e, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:41,336 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=28, state=RUNNABLE; OpenRegionProcedure 58fc4e90003c8b08ddc8335792cf7ba4, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:41,339 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=26, state=RUNNABLE; OpenRegionProcedure 9616664d7fc62c86fccfdcd29b92ba26, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:41,342 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=30, state=RUNNABLE; OpenRegionProcedure 09e4b50ce967513aa4fb462fc4309af0, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:41,487 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:41,487 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:14:41,489 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44610, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:14:41,494 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 
2023-07-19 18:14:41,494 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 58fc4e90003c8b08ddc8335792cf7ba4, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 18:14:41,494 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:41,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:41,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:41,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:41,497 INFO [StoreOpener-58fc4e90003c8b08ddc8335792cf7ba4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:41,498 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 
2023-07-19 18:14:41,498 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 09e4b50ce967513aa4fb462fc4309af0, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 18:14:41,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:41,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:41,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:41,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:41,500 DEBUG [StoreOpener-58fc4e90003c8b08ddc8335792cf7ba4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/f 2023-07-19 18:14:41,500 DEBUG [StoreOpener-58fc4e90003c8b08ddc8335792cf7ba4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/f 2023-07-19 18:14:41,501 INFO [StoreOpener-09e4b50ce967513aa4fb462fc4309af0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:41,502 INFO [StoreOpener-58fc4e90003c8b08ddc8335792cf7ba4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 58fc4e90003c8b08ddc8335792cf7ba4 columnFamilyName f 2023-07-19 18:14:41,503 DEBUG [StoreOpener-09e4b50ce967513aa4fb462fc4309af0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/f 2023-07-19 18:14:41,503 DEBUG [StoreOpener-09e4b50ce967513aa4fb462fc4309af0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/f 2023-07-19 18:14:41,503 INFO [StoreOpener-58fc4e90003c8b08ddc8335792cf7ba4-1] regionserver.HStore(310): Store=58fc4e90003c8b08ddc8335792cf7ba4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:41,506 INFO [StoreOpener-09e4b50ce967513aa4fb462fc4309af0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 09e4b50ce967513aa4fb462fc4309af0 columnFamilyName f 2023-07-19 18:14:41,507 INFO [StoreOpener-09e4b50ce967513aa4fb462fc4309af0-1] regionserver.HStore(310): Store=09e4b50ce967513aa4fb462fc4309af0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:41,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:41,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:41,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:41,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:41,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:41,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:41,526 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 58fc4e90003c8b08ddc8335792cf7ba4; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11750243680, jitterRate=0.09432671964168549}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:41,526 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 58fc4e90003c8b08ddc8335792cf7ba4: 2023-07-19 18:14:41,527 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 09e4b50ce967513aa4fb462fc4309af0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10305301760, jitterRate=-0.04024398326873779}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:41,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 09e4b50ce967513aa4fb462fc4309af0: 2023-07-19 18:14:41,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4., pid=38, masterSystemTime=1689790481487 2023-07-19 18:14:41,530 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0., pid=40, masterSystemTime=1689790481493 2023-07-19 18:14:41,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:41,536 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:41,536 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 
2023-07-19 18:14:41,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dd4249aba95ef691c99a6dfc932a11e7, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 18:14:41,537 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=58fc4e90003c8b08ddc8335792cf7ba4, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:41,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:41,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:41,537 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790481536"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790481536"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790481536"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790481536"}]},"ts":"1689790481536"} 2023-07-19 18:14:41,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:41,537 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:41,537 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 
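The OpenRegionProcedures above reopen the moved regions on servers 38419 and 38251, the members of the target group. Once they finish, a test might confirm every region of the table is hosted by a group server; the following is a sketch under assumptions (variable names, assertion style), not the literal TestRSGroupsAdmin1 code:

import java.util.Set;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

class VerifyRegionPlacementSketch {
  static void verify(Connection conn, RSGroupAdmin rsGroupAdmin) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    Set<Address> groupServers = rsGroupAdmin
        .getRSGroupInfo("Group_testTableMoveTruncateAndDrop_401676244").getServers();
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Each region's hosting server (e.g. jenkins-hbase4.apache.org,38419,... above)
        // must resolve to an Address that is a member of the target group.
        if (!groupServers.contains(loc.getServerName().getAddress())) {
          throw new AssertionError("region " + loc.getRegion().getRegionNameAsString()
              + " not on a group server: " + loc.getServerName());
        }
      }
    }
  }
}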
2023-07-19 18:14:41,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:41,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:41,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9616664d7fc62c86fccfdcd29b92ba26, NAME => 'Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 18:14:41,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:41,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:41,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:41,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:41,538 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=09e4b50ce967513aa4fb462fc4309af0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:41,539 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790481538"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790481538"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790481538"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790481538"}]},"ts":"1689790481538"} 2023-07-19 18:14:41,543 INFO [StoreOpener-9616664d7fc62c86fccfdcd29b92ba26-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:41,545 DEBUG [StoreOpener-9616664d7fc62c86fccfdcd29b92ba26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/f 2023-07-19 18:14:41,545 DEBUG [StoreOpener-9616664d7fc62c86fccfdcd29b92ba26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/f 2023-07-19 18:14:41,546 INFO [StoreOpener-9616664d7fc62c86fccfdcd29b92ba26-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9616664d7fc62c86fccfdcd29b92ba26 columnFamilyName f 2023-07-19 18:14:41,547 INFO [StoreOpener-9616664d7fc62c86fccfdcd29b92ba26-1] regionserver.HStore(310): Store=9616664d7fc62c86fccfdcd29b92ba26/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:41,547 INFO [StoreOpener-dd4249aba95ef691c99a6dfc932a11e7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:41,549 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=28 2023-07-19 18:14:41,549 DEBUG [StoreOpener-dd4249aba95ef691c99a6dfc932a11e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/f 2023-07-19 18:14:41,549 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=28, state=SUCCESS; OpenRegionProcedure 58fc4e90003c8b08ddc8335792cf7ba4, server=jenkins-hbase4.apache.org,38419,1689790478179 in 205 msec 2023-07-19 18:14:41,549 DEBUG [StoreOpener-dd4249aba95ef691c99a6dfc932a11e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/f 2023-07-19 18:14:41,550 INFO [StoreOpener-dd4249aba95ef691c99a6dfc932a11e7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dd4249aba95ef691c99a6dfc932a11e7 columnFamilyName f 2023-07-19 18:14:41,551 INFO [StoreOpener-dd4249aba95ef691c99a6dfc932a11e7-1] regionserver.HStore(310): Store=dd4249aba95ef691c99a6dfc932a11e7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:41,552 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:41,554 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:41,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:41,555 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=30 2023-07-19 18:14:41,555 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=30, state=SUCCESS; OpenRegionProcedure 09e4b50ce967513aa4fb462fc4309af0, server=jenkins-hbase4.apache.org,38251,1689790473799 in 200 msec 2023-07-19 18:14:41,557 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, REOPEN/MOVE in 564 msec 2023-07-19 18:14:41,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:41,558 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, REOPEN/MOVE in 565 msec 2023-07-19 18:14:41,561 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:41,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:41,563 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dd4249aba95ef691c99a6dfc932a11e7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11288075680, jitterRate=0.05128397047519684}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:41,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dd4249aba95ef691c99a6dfc932a11e7: 2023-07-19 18:14:41,563 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9616664d7fc62c86fccfdcd29b92ba26; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11640204320, jitterRate=0.08407850563526154}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:41,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9616664d7fc62c86fccfdcd29b92ba26: 2023-07-19 18:14:41,564 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7., pid=36, masterSystemTime=1689790481487 2023-07-19 18:14:41,564 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26., pid=39, masterSystemTime=1689790481493 2023-07-19 18:14:41,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:41,567 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:41,567 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:41,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 317e6ec6805082da0b86b5c9e86ab70e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 18:14:41,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:41,568 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=dd4249aba95ef691c99a6dfc932a11e7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:41,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:41,568 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790481568"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790481568"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790481568"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790481568"}]},"ts":"1689790481568"} 2023-07-19 18:14:41,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:41,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:41,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:41,568 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 
2023-07-19 18:14:41,575 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=9616664d7fc62c86fccfdcd29b92ba26, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:41,575 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790481575"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790481575"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790481575"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790481575"}]},"ts":"1689790481575"} 2023-07-19 18:14:41,578 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=32 2023-07-19 18:14:41,578 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=32, state=SUCCESS; OpenRegionProcedure dd4249aba95ef691c99a6dfc932a11e7, server=jenkins-hbase4.apache.org,38419,1689790478179 in 240 msec 2023-07-19 18:14:41,583 INFO [StoreOpener-317e6ec6805082da0b86b5c9e86ab70e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:41,587 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, REOPEN/MOVE in 584 msec 2023-07-19 18:14:41,591 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=26 2023-07-19 18:14:41,591 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=26, state=SUCCESS; OpenRegionProcedure 9616664d7fc62c86fccfdcd29b92ba26, server=jenkins-hbase4.apache.org,38251,1689790473799 in 240 msec 2023-07-19 18:14:41,594 DEBUG [StoreOpener-317e6ec6805082da0b86b5c9e86ab70e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/f 2023-07-19 18:14:41,595 DEBUG [StoreOpener-317e6ec6805082da0b86b5c9e86ab70e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/f 2023-07-19 18:14:41,595 INFO [StoreOpener-317e6ec6805082da0b86b5c9e86ab70e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 317e6ec6805082da0b86b5c9e86ab70e columnFamilyName f 2023-07-19 18:14:41,596 INFO 
[StoreOpener-317e6ec6805082da0b86b5c9e86ab70e-1] regionserver.HStore(310): Store=317e6ec6805082da0b86b5c9e86ab70e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:41,597 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, REOPEN/MOVE in 612 msec 2023-07-19 18:14:41,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:41,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:41,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:41,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 317e6ec6805082da0b86b5c9e86ab70e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11442426400, jitterRate=0.06565900146961212}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:41,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 317e6ec6805082da0b86b5c9e86ab70e: 2023-07-19 18:14:41,611 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e., pid=37, masterSystemTime=1689790481487 2023-07-19 18:14:41,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:41,615 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 
2023-07-19 18:14:41,616 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=317e6ec6805082da0b86b5c9e86ab70e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:41,616 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790481616"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790481616"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790481616"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790481616"}]},"ts":"1689790481616"} 2023-07-19 18:14:41,621 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=27 2023-07-19 18:14:41,621 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=27, state=SUCCESS; OpenRegionProcedure 317e6ec6805082da0b86b5c9e86ab70e, server=jenkins-hbase4.apache.org,38419,1689790478179 in 287 msec 2023-07-19 18:14:41,625 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, REOPEN/MOVE in 639 msec 2023-07-19 18:14:41,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-19 18:14:41,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_401676244. 
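[editor's note] The entries above show the master finishing an RSGroupAdminService.MoveTables request and reporting that every region of Group_testTableMoveTruncateAndDrop has been reopened on servers of the target rsgroup (the REOPEN/MOVE TransitRegionStateProcedures pid=26..32). A minimal client-side sketch of how such a move could be issued is below; it is not taken from this test — the group name "my_group" is hypothetical, and the use of RSGroupAdminClient#moveTables is an assumption based on the hbase-rsgroup module's admin interface.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Assumed constructor/API from the hbase-rsgroup module.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Moving a table to another rsgroup triggers the REOPEN/MOVE
          // region-transition procedures recorded in the log above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
              "my_group"); // hypothetical target group name
        }
      }
    }
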
2023-07-19 18:14:41,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:42,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:42,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:42,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:42,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:42,014 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:42,021 INFO [Listener at localhost/46039] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:42,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:42,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:42,040 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790482040"}]},"ts":"1689790482040"} 2023-07-19 18:14:42,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-19 18:14:42,042 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-19 18:14:42,045 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-19 18:14:42,047 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, UNASSIGN}] 2023-07-19 18:14:42,052 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, UNASSIGN 2023-07-19 18:14:42,052 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, UNASSIGN 2023-07-19 18:14:42,052 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, UNASSIGN 2023-07-19 18:14:42,053 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, UNASSIGN 2023-07-19 18:14:42,053 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, UNASSIGN 2023-07-19 18:14:42,053 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=dd4249aba95ef691c99a6dfc932a11e7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:42,053 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=09e4b50ce967513aa4fb462fc4309af0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:42,054 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790482053"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790482053"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790482053"}]},"ts":"1689790482053"} 2023-07-19 18:14:42,054 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790482053"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790482053"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790482053"}]},"ts":"1689790482053"} 2023-07-19 18:14:42,054 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=317e6ec6805082da0b86b5c9e86ab70e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:42,054 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790482054"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790482054"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790482054"}]},"ts":"1689790482054"} 2023-07-19 18:14:42,055 INFO 
[PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=9616664d7fc62c86fccfdcd29b92ba26, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:42,055 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=58fc4e90003c8b08ddc8335792cf7ba4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:42,055 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790482055"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790482055"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790482055"}]},"ts":"1689790482055"} 2023-07-19 18:14:42,055 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790482055"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790482055"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790482055"}]},"ts":"1689790482055"} 2023-07-19 18:14:42,056 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=46, state=RUNNABLE; CloseRegionProcedure dd4249aba95ef691c99a6dfc932a11e7, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:42,059 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=45, state=RUNNABLE; CloseRegionProcedure 09e4b50ce967513aa4fb462fc4309af0, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:42,060 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=43, state=RUNNABLE; CloseRegionProcedure 317e6ec6805082da0b86b5c9e86ab70e, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:42,063 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=42, state=RUNNABLE; CloseRegionProcedure 9616664d7fc62c86fccfdcd29b92ba26, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:42,063 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=44, state=RUNNABLE; CloseRegionProcedure 58fc4e90003c8b08ddc8335792cf7ba4, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:42,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-19 18:14:42,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:42,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dd4249aba95ef691c99a6dfc932a11e7, disabling compactions & flushes 2023-07-19 18:14:42,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:42,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 
2023-07-19 18:14:42,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. after waiting 0 ms 2023-07-19 18:14:42,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:42,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:42,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9616664d7fc62c86fccfdcd29b92ba26, disabling compactions & flushes 2023-07-19 18:14:42,214 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:42,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:42,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. after waiting 0 ms 2023-07-19 18:14:42,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 2023-07-19 18:14:42,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 18:14:42,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 18:14:42,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7. 2023-07-19 18:14:42,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dd4249aba95ef691c99a6dfc932a11e7: 2023-07-19 18:14:42,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26. 
2023-07-19 18:14:42,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9616664d7fc62c86fccfdcd29b92ba26: 2023-07-19 18:14:42,230 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:42,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:42,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 58fc4e90003c8b08ddc8335792cf7ba4, disabling compactions & flushes 2023-07-19 18:14:42,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:42,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:42,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. after waiting 0 ms 2023-07-19 18:14:42,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 2023-07-19 18:14:42,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 18:14:42,241 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4. 
2023-07-19 18:14:42,241 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 58fc4e90003c8b08ddc8335792cf7ba4: 2023-07-19 18:14:42,243 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=dd4249aba95ef691c99a6dfc932a11e7, regionState=CLOSED 2023-07-19 18:14:42,244 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790482243"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790482243"}]},"ts":"1689790482243"} 2023-07-19 18:14:42,244 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:42,244 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:42,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 09e4b50ce967513aa4fb462fc4309af0, disabling compactions & flushes 2023-07-19 18:14:42,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:42,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:42,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. after waiting 0 ms 2023-07-19 18:14:42,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 2023-07-19 18:14:42,247 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=9616664d7fc62c86fccfdcd29b92ba26, regionState=CLOSED 2023-07-19 18:14:42,247 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790482247"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790482247"}]},"ts":"1689790482247"} 2023-07-19 18:14:42,248 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:42,248 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:42,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 317e6ec6805082da0b86b5c9e86ab70e, disabling compactions & flushes 2023-07-19 18:14:42,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 
2023-07-19 18:14:42,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:42,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. after waiting 0 ms 2023-07-19 18:14:42,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:42,255 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=58fc4e90003c8b08ddc8335792cf7ba4, regionState=CLOSED 2023-07-19 18:14:42,257 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790482255"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790482255"}]},"ts":"1689790482255"} 2023-07-19 18:14:42,260 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 18:14:42,264 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=46 2023-07-19 18:14:42,265 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=46, state=SUCCESS; CloseRegionProcedure dd4249aba95ef691c99a6dfc932a11e7, server=jenkins-hbase4.apache.org,38419,1689790478179 in 193 msec 2023-07-19 18:14:42,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 18:14:42,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 18:14:42,266 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=42 2023-07-19 18:14:42,266 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=42, state=SUCCESS; CloseRegionProcedure 9616664d7fc62c86fccfdcd29b92ba26, server=jenkins-hbase4.apache.org,38251,1689790473799 in 192 msec 2023-07-19 18:14:42,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e. 2023-07-19 18:14:42,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0. 
2023-07-19 18:14:42,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 317e6ec6805082da0b86b5c9e86ab70e: 2023-07-19 18:14:42,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 09e4b50ce967513aa4fb462fc4309af0: 2023-07-19 18:14:42,269 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd4249aba95ef691c99a6dfc932a11e7, UNASSIGN in 217 msec 2023-07-19 18:14:42,272 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:42,272 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9616664d7fc62c86fccfdcd29b92ba26, UNASSIGN in 219 msec 2023-07-19 18:14:42,273 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=317e6ec6805082da0b86b5c9e86ab70e, regionState=CLOSED 2023-07-19 18:14:42,273 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=44 2023-07-19 18:14:42,274 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790482273"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790482273"}]},"ts":"1689790482273"} 2023-07-19 18:14:42,274 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=44, state=SUCCESS; CloseRegionProcedure 58fc4e90003c8b08ddc8335792cf7ba4, server=jenkins-hbase4.apache.org,38419,1689790478179 in 203 msec 2023-07-19 18:14:42,274 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:42,275 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=09e4b50ce967513aa4fb462fc4309af0, regionState=CLOSED 2023-07-19 18:14:42,276 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790482275"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790482275"}]},"ts":"1689790482275"} 2023-07-19 18:14:42,276 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=58fc4e90003c8b08ddc8335792cf7ba4, UNASSIGN in 227 msec 2023-07-19 18:14:42,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=43 2023-07-19 18:14:42,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=43, state=SUCCESS; CloseRegionProcedure 317e6ec6805082da0b86b5c9e86ab70e, server=jenkins-hbase4.apache.org,38419,1689790478179 in 216 msec 2023-07-19 18:14:42,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=317e6ec6805082da0b86b5c9e86ab70e, UNASSIGN in 232 msec 2023-07-19 18:14:42,282 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=45 2023-07-19 18:14:42,282 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=45, state=SUCCESS; CloseRegionProcedure 09e4b50ce967513aa4fb462fc4309af0, server=jenkins-hbase4.apache.org,38251,1689790473799 in 219 msec 2023-07-19 18:14:42,284 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=41 2023-07-19 18:14:42,284 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09e4b50ce967513aa4fb462fc4309af0, UNASSIGN in 235 msec 2023-07-19 18:14:42,285 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790482285"}]},"ts":"1689790482285"} 2023-07-19 18:14:42,287 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-19 18:14:42,289 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-19 18:14:42,293 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 261 msec 2023-07-19 18:14:42,336 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 18:14:42,337 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-19 18:14:42,338 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 18:14:42,338 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-19 18:14:42,338 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 18:14:42,338 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-19 18:14:42,340 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-19 18:14:42,341 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-19 18:14:42,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-19 18:14:42,345 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-19 18:14:42,346 INFO [Listener at localhost/46039] 
client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:42,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:42,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-19 18:14:42,364 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-19 18:14:42,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-19 18:14:42,381 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:42,381 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:42,381 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:42,381 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:42,381 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:42,386 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/recovered.edits] 2023-07-19 18:14:42,386 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/recovered.edits] 2023-07-19 18:14:42,386 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/recovered.edits] 2023-07-19 
18:14:42,386 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/recovered.edits] 2023-07-19 18:14:42,397 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/recovered.edits] 2023-07-19 18:14:42,414 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/recovered.edits/7.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26/recovered.edits/7.seqid 2023-07-19 18:14:42,414 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/recovered.edits/7.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7/recovered.edits/7.seqid 2023-07-19 18:14:42,415 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/recovered.edits/7.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e/recovered.edits/7.seqid 2023-07-19 18:14:42,415 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/recovered.edits/7.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4/recovered.edits/7.seqid 2023-07-19 18:14:42,416 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/recovered.edits/7.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0/recovered.edits/7.seqid 2023-07-19 18:14:42,417 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted 
hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9616664d7fc62c86fccfdcd29b92ba26 2023-07-19 18:14:42,417 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd4249aba95ef691c99a6dfc932a11e7 2023-07-19 18:14:42,417 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/317e6ec6805082da0b86b5c9e86ab70e 2023-07-19 18:14:42,418 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/58fc4e90003c8b08ddc8335792cf7ba4 2023-07-19 18:14:42,418 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09e4b50ce967513aa4fb462fc4309af0 2023-07-19 18:14:42,418 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 18:14:42,450 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-19 18:14:42,456 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-19 18:14:42,457 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-19 18:14:42,458 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790482457"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:42,458 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790482457"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:42,458 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790482457"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:42,458 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790482457"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:42,458 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790482457"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:42,463 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-19 18:14:42,463 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 
9616664d7fc62c86fccfdcd29b92ba26, NAME => 'Group_testTableMoveTruncateAndDrop,,1689790479768.9616664d7fc62c86fccfdcd29b92ba26.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 317e6ec6805082da0b86b5c9e86ab70e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689790479768.317e6ec6805082da0b86b5c9e86ab70e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 58fc4e90003c8b08ddc8335792cf7ba4, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790479768.58fc4e90003c8b08ddc8335792cf7ba4.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 09e4b50ce967513aa4fb462fc4309af0, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790479768.09e4b50ce967513aa4fb462fc4309af0.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => dd4249aba95ef691c99a6dfc932a11e7, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689790479768.dd4249aba95ef691c99a6dfc932a11e7.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-19 18:14:42,463 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-19 18:14:42,463 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689790482463"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:42,466 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-19 18:14:42,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-19 18:14:42,475 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:42,475 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:42,475 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:42,475 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:42,475 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:42,475 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5 empty. 2023-07-19 18:14:42,475 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130 empty. 
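
The truncate flow above begins with a client RPC ("Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop") that the master turns into TruncateTableProcedure pid=52 with preserveSplits=true, after which the old region directories are archived. A minimal sketch of what that client call looks like with the HBase 2.x Admin API; the connection setup is an assumption, only the table name and the preserveSplits flag come from the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // assumes hbase-site.xml on the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);                       // truncate requires a disabled table
          }
          // preserveSplits=true keeps the existing split points, matching the procedure logged above.
          admin.truncateTable(table, true);
        }
      }
    }
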
2023-07-19 18:14:42,476 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd empty. 2023-07-19 18:14:42,476 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562 empty. 2023-07-19 18:14:42,476 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:42,476 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d empty. 2023-07-19 18:14:42,476 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:42,476 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:42,477 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:42,477 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:42,477 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 18:14:42,516 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:42,518 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 837c8a77a3f57ea032dc348c07a78eb5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:42,519 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 6ffaeb6a58ebd1d9bd26e3dfee88ce1d, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:42,519 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => e2be764d395fd5579eecdfed7812e562, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:42,603 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:42,604 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 837c8a77a3f57ea032dc348c07a78eb5, disabling compactions & flushes 2023-07-19 18:14:42,604 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 2023-07-19 18:14:42,604 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 2023-07-19 18:14:42,604 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. after waiting 0 ms 2023-07-19 18:14:42,604 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 2023-07-19 18:14:42,604 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 
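
The "creating {ENCODED => ...}" entries show the table descriptor the procedure re-creates: a single column family 'f' with VERSIONS=1, BLOOMFILTER=NONE, BLOCKSIZE=65536, no compression or encoding, REPLICATION_SCOPE=0 and REGION_REPLICATION=1. A sketch of how the same descriptor would be expressed with the 2.x builder API; the builder calls are inferred from those attributes, not taken from the test source:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
      static TableDescriptor build() {
        ColumnFamilyDescriptor f = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)                  // VERSIONS => '1'
            .setBloomFilterType(BloomType.NONE) // BLOOMFILTER => 'NONE'
            .setBlocksize(65536)                // BLOCKSIZE => '65536'
            .setScope(0)                        // REPLICATION_SCOPE => '0'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setRegionReplication(1)            // TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}
            .setColumnFamily(f)
            .build();
      }
    }
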
2023-07-19 18:14:42,604 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 837c8a77a3f57ea032dc348c07a78eb5: 2023-07-19 18:14:42,605 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => c8edf6a7445a15ab399f972114a6d7bd, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:42,605 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:42,605 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 6ffaeb6a58ebd1d9bd26e3dfee88ce1d, disabling compactions & flushes 2023-07-19 18:14:42,606 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 2023-07-19 18:14:42,606 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 2023-07-19 18:14:42,606 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. after waiting 0 ms 2023-07-19 18:14:42,606 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 2023-07-19 18:14:42,606 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 
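
The STARTKEY/ENDKEY pairs in those creating entries reproduce the original split points ('aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'), which is exactly what preserveSplits=true provides. For reference, a hedged sketch of creating a table with the same five regions up front; Bytes.toBytesBinary is used for the two keys that contain non-printable bytes, written with the same escaping the log uses:

    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SplitKeySketch {
      static void createWithSplits(Admin admin, TableDescriptor desc) throws Exception {
        byte[][] splits = new byte[][] {
            Bytes.toBytes("aaaaa"),
            Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
            Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
            Bytes.toBytes("zzzzz")
        };
        // Four split keys produce five regions, matching the boundaries logged above.
        admin.createTable(desc, splits);
      }
    }
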
2023-07-19 18:14:42,606 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 6ffaeb6a58ebd1d9bd26e3dfee88ce1d: 2023-07-19 18:14:42,606 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 8554daa8decc8b7120bee218140f6130, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:42,620 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:42,621 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing e2be764d395fd5579eecdfed7812e562, disabling compactions & flushes 2023-07-19 18:14:42,621 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 2023-07-19 18:14:42,621 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 2023-07-19 18:14:42,621 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. after waiting 0 ms 2023-07-19 18:14:42,621 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 2023-07-19 18:14:42,621 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 
2023-07-19 18:14:42,621 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for e2be764d395fd5579eecdfed7812e562: 2023-07-19 18:14:42,657 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:42,657 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing c8edf6a7445a15ab399f972114a6d7bd, disabling compactions & flushes 2023-07-19 18:14:42,657 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 2023-07-19 18:14:42,657 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 2023-07-19 18:14:42,657 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. after waiting 0 ms 2023-07-19 18:14:42,657 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 2023-07-19 18:14:42,658 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 2023-07-19 18:14:42,658 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for c8edf6a7445a15ab399f972114a6d7bd: 2023-07-19 18:14:42,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-19 18:14:42,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-19 18:14:43,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:43,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 8554daa8decc8b7120bee218140f6130, disabling compactions & flushes 2023-07-19 18:14:43,055 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 2023-07-19 18:14:43,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 
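
The repeated "Checking to see if procedure is done pid=52" lines are the client polling the master for completion of the truncate procedure. A small sketch of the non-blocking variant of the same call, assuming an Admin handle as in the earlier sketch; the timeout value is arbitrary:

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class TruncateAsyncSketch {
      static void truncateAndWait(Admin admin) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // Returns once the procedure is submitted; the get() below is what drives
        // the periodic "is procedure done" checks seen in the master log.
        Future<Void> f = admin.truncateTableAsync(table, true);
        f.get(5, TimeUnit.MINUTES);
      }
    }
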
2023-07-19 18:14:43,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. after waiting 0 ms 2023-07-19 18:14:43,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 2023-07-19 18:14:43,056 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 2023-07-19 18:14:43,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 8554daa8decc8b7120bee218140f6130: 2023-07-19 18:14:43,060 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790483060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790483060"}]},"ts":"1689790483060"} 2023-07-19 18:14:43,060 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790483060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790483060"}]},"ts":"1689790483060"} 2023-07-19 18:14:43,060 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790483060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790483060"}]},"ts":"1689790483060"} 2023-07-19 18:14:43,060 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790483060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790483060"}]},"ts":"1689790483060"} 2023-07-19 18:14:43,060 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790483060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790483060"}]},"ts":"1689790483060"} 2023-07-19 18:14:43,064 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
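
The Put entries above write one row per region into hbase:meta (info:regioninfo plus info:state, with info:sn and info:server following as assignment progresses). A sketch of reading those catalog rows back from the client side; the prefix scan is an assumption about how to select this table's rows, while the row-key layout and the info family come from the log itself:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaScanSketch {
      static void dumpMetaRows(Connection conn) throws Exception {
        // Region rows are keyed "<table>,<startkey>,<timestamp>.<encoded-name>."
        byte[] prefix = Bytes.toBytes("Group_testTableMoveTruncateAndDrop,");
        Scan scan = new Scan().setRowPrefixFilter(prefix).addFamily(Bytes.toBytes("info"));
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner rs = meta.getScanner(scan)) {
          for (Result r : rs) {
            System.out.println(Bytes.toStringBinary(r.getRow()));
          }
        }
      }
    }
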
2023-07-19 18:14:43,066 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790483066"}]},"ts":"1689790483066"} 2023-07-19 18:14:43,068 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-19 18:14:43,072 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:43,073 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:43,073 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:43,073 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:43,075 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=837c8a77a3f57ea032dc348c07a78eb5, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2be764d395fd5579eecdfed7812e562, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8edf6a7445a15ab399f972114a6d7bd, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8554daa8decc8b7120bee218140f6130, ASSIGN}] 2023-07-19 18:14:43,077 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=837c8a77a3f57ea032dc348c07a78eb5, ASSIGN 2023-07-19 18:14:43,077 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2be764d395fd5579eecdfed7812e562, ASSIGN 2023-07-19 18:14:43,078 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, ASSIGN 2023-07-19 18:14:43,078 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8554daa8decc8b7120bee218140f6130, ASSIGN 2023-07-19 18:14:43,078 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8edf6a7445a15ab399f972114a6d7bd, ASSIGN 2023-07-19 18:14:43,079 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=837c8a77a3f57ea032dc348c07a78eb5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38251,1689790473799; forceNewPlan=false, retain=false 2023-07-19 18:14:43,079 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2be764d395fd5579eecdfed7812e562, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38251,1689790473799; forceNewPlan=false, retain=false 2023-07-19 18:14:43,079 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8edf6a7445a15ab399f972114a6d7bd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38419,1689790478179; forceNewPlan=false, retain=false 2023-07-19 18:14:43,079 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38419,1689790478179; forceNewPlan=false, retain=false 2023-07-19 18:14:43,079 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8554daa8decc8b7120bee218140f6130, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38251,1689790473799; forceNewPlan=false, retain=false 2023-07-19 18:14:43,229 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
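
Between "Initialized subprocedures=[... ASSIGN ...]" and the balancer's "Reassigned 5 regions", the five new regions are in transition (OFFLINE, then OPENING, then OPEN). A hedged sketch of how a client could watch that drain via ClusterMetrics; the option set and the polling loop are assumptions for illustration, not something this test does:

    import java.util.EnumSet;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.ClusterMetrics.Option;
    import org.apache.hadoop.hbase.client.Admin;

    public class RitSketch {
      static void waitForNoRegionsInTransition(Admin admin) throws Exception {
        while (true) {
          ClusterMetrics metrics =
              admin.getClusterMetrics(EnumSet.of(Option.REGIONS_IN_TRANSITION));
          if (metrics.getRegionStatesInTransition().isEmpty()) {
            return;            // nothing left in OPENING/CLOSING state
          }
          Thread.sleep(200);   // polling interval is arbitrary
        }
      }
    }
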
2023-07-19 18:14:43,236 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=c8edf6a7445a15ab399f972114a6d7bd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:43,236 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=837c8a77a3f57ea032dc348c07a78eb5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:43,236 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790483236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790483236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790483236"}]},"ts":"1689790483236"} 2023-07-19 18:14:43,236 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=e2be764d395fd5579eecdfed7812e562, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:43,237 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790483236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790483236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790483236"}]},"ts":"1689790483236"} 2023-07-19 18:14:43,236 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:43,236 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=8554daa8decc8b7120bee218140f6130, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:43,237 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790483236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790483236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790483236"}]},"ts":"1689790483236"} 2023-07-19 18:14:43,237 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790483236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790483236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790483236"}]},"ts":"1689790483236"} 2023-07-19 18:14:43,236 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790483236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790483236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790483236"}]},"ts":"1689790483236"} 2023-07-19 18:14:43,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=53, state=RUNNABLE; OpenRegionProcedure 
837c8a77a3f57ea032dc348c07a78eb5, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:43,242 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=54, state=RUNNABLE; OpenRegionProcedure e2be764d395fd5579eecdfed7812e562, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:43,244 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=55, state=RUNNABLE; OpenRegionProcedure 6ffaeb6a58ebd1d9bd26e3dfee88ce1d, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:43,245 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=57, state=RUNNABLE; OpenRegionProcedure 8554daa8decc8b7120bee218140f6130, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:43,246 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=56, state=RUNNABLE; OpenRegionProcedure c8edf6a7445a15ab399f972114a6d7bd, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:43,405 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 2023-07-19 18:14:43,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c8edf6a7445a15ab399f972114a6d7bd, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 18:14:43,406 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 
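
The OpenRegionProcedure children (pid=58 through pid=62) dispatch the actual opens to the two region servers, ...38251 and ...38419. From a client's point of view the table only becomes usable again once all of them finish; a minimal sketch of waiting for that, assuming the same Admin handle and leaving timeout handling out:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class AvailabilitySketch {
      static void waitUntilAvailable(Admin admin) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // isTableAvailable returns true once every region of the table is open on some server.
        while (!admin.isTableAvailable(table)) {
          Thread.sleep(100);
        }
      }
    }
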
2023-07-19 18:14:43,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:43,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8554daa8decc8b7120bee218140f6130, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 18:14:43,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:43,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:43,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:43,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:43,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:43,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:43,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:43,408 INFO [StoreOpener-c8edf6a7445a15ab399f972114a6d7bd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:43,411 INFO [StoreOpener-8554daa8decc8b7120bee218140f6130-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:43,412 DEBUG [StoreOpener-c8edf6a7445a15ab399f972114a6d7bd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd/f 2023-07-19 18:14:43,412 DEBUG [StoreOpener-c8edf6a7445a15ab399f972114a6d7bd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd/f 2023-07-19 18:14:43,413 DEBUG [StoreOpener-8554daa8decc8b7120bee218140f6130-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130/f 2023-07-19 18:14:43,413 DEBUG [StoreOpener-8554daa8decc8b7120bee218140f6130-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130/f 2023-07-19 18:14:43,413 INFO [StoreOpener-c8edf6a7445a15ab399f972114a6d7bd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c8edf6a7445a15ab399f972114a6d7bd columnFamilyName f 2023-07-19 18:14:43,413 INFO [StoreOpener-8554daa8decc8b7120bee218140f6130-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8554daa8decc8b7120bee218140f6130 columnFamilyName f 2023-07-19 18:14:43,414 INFO [StoreOpener-8554daa8decc8b7120bee218140f6130-1] regionserver.HStore(310): Store=8554daa8decc8b7120bee218140f6130/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:43,414 INFO [StoreOpener-c8edf6a7445a15ab399f972114a6d7bd-1] regionserver.HStore(310): Store=c8edf6a7445a15ab399f972114a6d7bd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:43,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:43,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:43,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:43,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:43,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:43,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:43,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:43,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:43,437 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c8edf6a7445a15ab399f972114a6d7bd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11085900800, jitterRate=0.0324549674987793}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:43,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c8edf6a7445a15ab399f972114a6d7bd: 2023-07-19 18:14:43,437 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8554daa8decc8b7120bee218140f6130; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10579963520, jitterRate=-0.014664113521575928}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:43,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8554daa8decc8b7120bee218140f6130: 2023-07-19 18:14:43,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd., pid=62, masterSystemTime=1689790483399 2023-07-19 18:14:43,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130., pid=61, masterSystemTime=1689790483399 2023-07-19 18:14:43,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 
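
The CompactionConfiguration(173) lines print the effective compaction settings for family f: minFilesToCompact=3, maxFilesToCompact=10, ratio 1.2, major period 604800000 ms (7 days) with jitter 0.5, and minCompactSize 128 MB. A sketch of the standard configuration keys those values usually come from; the key-to-value mapping is inferred from HBase defaults rather than read from this test's configuration, and the throttle point and tiered-compaction settings are left out:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfSketch {
      static Configuration compactionSettings() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);             // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);            // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);      // ratio
        conf.setLong("hbase.hstore.compaction.min.size",
            128L * 1024 * 1024);                                   // minCompactSize (128 MB)
        conf.setLong("hbase.hregion.majorcompaction", 604800000L); // major period, 7 days in ms
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
        return conf;
      }
    }
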
2023-07-19 18:14:43,444 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 2023-07-19 18:14:43,444 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 2023-07-19 18:14:43,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6ffaeb6a58ebd1d9bd26e3dfee88ce1d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 18:14:43,445 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=c8edf6a7445a15ab399f972114a6d7bd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:43,445 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:43,445 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:43,445 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790483444"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790483444"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790483444"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790483444"}]},"ts":"1689790483444"} 2023-07-19 18:14:43,445 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:43,445 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:43,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 2023-07-19 18:14:43,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 2023-07-19 18:14:43,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 
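
Once the RegionStateStore rows flip to regionState=OPEN with an openSeqNum, clients can see where each of the five new regions landed. A sketch using RegionLocator, assuming an open Connection; the print format is arbitrary:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class LocationSketch {
      static void printLocations(Connection conn) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
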
2023-07-19 18:14:43,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 837c8a77a3f57ea032dc348c07a78eb5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 18:14:43,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:43,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:43,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:43,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:43,450 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=8554daa8decc8b7120bee218140f6130, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:43,450 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790483450"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790483450"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790483450"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790483450"}]},"ts":"1689790483450"} 2023-07-19 18:14:43,452 INFO [StoreOpener-837c8a77a3f57ea032dc348c07a78eb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:43,452 INFO [StoreOpener-6ffaeb6a58ebd1d9bd26e3dfee88ce1d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:43,454 DEBUG [StoreOpener-6ffaeb6a58ebd1d9bd26e3dfee88ce1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d/f 2023-07-19 18:14:43,454 DEBUG [StoreOpener-6ffaeb6a58ebd1d9bd26e3dfee88ce1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d/f 2023-07-19 18:14:43,454 DEBUG [StoreOpener-837c8a77a3f57ea032dc348c07a78eb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5/f 2023-07-19 18:14:43,455 DEBUG [StoreOpener-837c8a77a3f57ea032dc348c07a78eb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5/f 2023-07-19 18:14:43,455 INFO [StoreOpener-837c8a77a3f57ea032dc348c07a78eb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 837c8a77a3f57ea032dc348c07a78eb5 columnFamilyName f 2023-07-19 18:14:43,457 INFO [StoreOpener-6ffaeb6a58ebd1d9bd26e3dfee88ce1d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6ffaeb6a58ebd1d9bd26e3dfee88ce1d columnFamilyName f 2023-07-19 18:14:43,457 INFO [StoreOpener-837c8a77a3f57ea032dc348c07a78eb5-1] regionserver.HStore(310): Store=837c8a77a3f57ea032dc348c07a78eb5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:43,457 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=56 2023-07-19 18:14:43,458 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=56, state=SUCCESS; OpenRegionProcedure c8edf6a7445a15ab399f972114a6d7bd, server=jenkins-hbase4.apache.org,38419,1689790478179 in 202 msec 2023-07-19 18:14:43,458 INFO [StoreOpener-6ffaeb6a58ebd1d9bd26e3dfee88ce1d-1] regionserver.HStore(310): Store=6ffaeb6a58ebd1d9bd26e3dfee88ce1d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:43,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:43,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:43,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:43,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:43,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:43,470 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8edf6a7445a15ab399f972114a6d7bd, ASSIGN in 383 msec 2023-07-19 18:14:43,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:43,471 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=57 2023-07-19 18:14:43,471 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=57, state=SUCCESS; OpenRegionProcedure 8554daa8decc8b7120bee218140f6130, server=jenkins-hbase4.apache.org,38251,1689790473799 in 213 msec 2023-07-19 18:14:43,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-19 18:14:43,476 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8554daa8decc8b7120bee218140f6130, ASSIGN in 396 msec 2023-07-19 18:14:43,498 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:43,499 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6ffaeb6a58ebd1d9bd26e3dfee88ce1d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9602731520, jitterRate=-0.10567593574523926}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:43,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:43,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6ffaeb6a58ebd1d9bd26e3dfee88ce1d: 2023-07-19 18:14:43,500 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 837c8a77a3f57ea032dc348c07a78eb5; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11144208160, jitterRate=0.037885263562202454}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:43,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 837c8a77a3f57ea032dc348c07a78eb5: 2023-07-19 18:14:43,500 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d., pid=60, masterSystemTime=1689790483399 2023-07-19 18:14:43,501 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5., pid=58, masterSystemTime=1689790483399 2023-07-19 18:14:43,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 2023-07-19 18:14:43,503 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 2023-07-19 18:14:43,505 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:43,505 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790483505"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790483505"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790483505"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790483505"}]},"ts":"1689790483505"} 2023-07-19 18:14:43,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 2023-07-19 18:14:43,506 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 2023-07-19 18:14:43,506 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 
2023-07-19 18:14:43,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e2be764d395fd5579eecdfed7812e562, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 18:14:43,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:43,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:43,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:43,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:43,508 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=837c8a77a3f57ea032dc348c07a78eb5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:43,510 INFO [StoreOpener-e2be764d395fd5579eecdfed7812e562-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:43,511 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790483508"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790483508"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790483508"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790483508"}]},"ts":"1689790483508"} 2023-07-19 18:14:43,515 DEBUG [StoreOpener-e2be764d395fd5579eecdfed7812e562-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562/f 2023-07-19 18:14:43,515 DEBUG [StoreOpener-e2be764d395fd5579eecdfed7812e562-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562/f 2023-07-19 18:14:43,516 INFO [StoreOpener-e2be764d395fd5579eecdfed7812e562-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e2be764d395fd5579eecdfed7812e562 columnFamilyName f 2023-07-19 18:14:43,517 INFO [StoreOpener-e2be764d395fd5579eecdfed7812e562-1] regionserver.HStore(310): Store=e2be764d395fd5579eecdfed7812e562/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:43,517 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=55 2023-07-19 18:14:43,517 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; OpenRegionProcedure 6ffaeb6a58ebd1d9bd26e3dfee88ce1d, server=jenkins-hbase4.apache.org,38419,1689790478179 in 263 msec 2023-07-19 18:14:43,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:43,518 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=53 2023-07-19 18:14:43,519 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=53, state=SUCCESS; OpenRegionProcedure 837c8a77a3f57ea032dc348c07a78eb5, server=jenkins-hbase4.apache.org,38251,1689790473799 in 272 msec 2023-07-19 18:14:43,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:43,520 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, ASSIGN in 442 msec 2023-07-19 18:14:43,520 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=837c8a77a3f57ea032dc348c07a78eb5, ASSIGN in 446 msec 2023-07-19 18:14:43,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:43,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:43,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e2be764d395fd5579eecdfed7812e562; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11702704160, jitterRate=0.08989925682544708}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:43,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e2be764d395fd5579eecdfed7812e562: 2023-07-19 18:14:43,526 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562., pid=59, masterSystemTime=1689790483399 2023-07-19 18:14:43,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 2023-07-19 18:14:43,528 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 2023-07-19 18:14:43,529 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=e2be764d395fd5579eecdfed7812e562, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:43,529 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790483529"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790483529"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790483529"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790483529"}]},"ts":"1689790483529"} 2023-07-19 18:14:43,534 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=54 2023-07-19 18:14:43,534 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=54, state=SUCCESS; OpenRegionProcedure e2be764d395fd5579eecdfed7812e562, server=jenkins-hbase4.apache.org,38251,1689790473799 in 289 msec 2023-07-19 18:14:43,536 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=52 2023-07-19 18:14:43,537 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2be764d395fd5579eecdfed7812e562, ASSIGN in 459 msec 2023-07-19 18:14:43,537 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790483537"}]},"ts":"1689790483537"} 2023-07-19 18:14:43,539 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-19 18:14:43,541 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-19 18:14:43,546 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.1870 sec 2023-07-19 18:14:44,465 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-19 18:14:44,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-19 18:14:44,480 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-19 18:14:44,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): 
Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:44,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:44,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:44,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:44,485 INFO [Listener at localhost/46039] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:44,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:44,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:44,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-19 18:14:44,492 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790484492"}]},"ts":"1689790484492"} 2023-07-19 18:14:44,494 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-19 18:14:44,496 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-19 18:14:44,497 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=837c8a77a3f57ea032dc348c07a78eb5, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2be764d395fd5579eecdfed7812e562, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8edf6a7445a15ab399f972114a6d7bd, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8554daa8decc8b7120bee218140f6130, UNASSIGN}] 2023-07-19 18:14:44,499 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2be764d395fd5579eecdfed7812e562, UNASSIGN 2023-07-19 18:14:44,500 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for 
pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=837c8a77a3f57ea032dc348c07a78eb5, UNASSIGN 2023-07-19 18:14:44,500 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8edf6a7445a15ab399f972114a6d7bd, UNASSIGN 2023-07-19 18:14:44,506 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8554daa8decc8b7120bee218140f6130, UNASSIGN 2023-07-19 18:14:44,507 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, UNASSIGN 2023-07-19 18:14:44,509 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=e2be764d395fd5579eecdfed7812e562, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:44,509 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790484508"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790484508"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790484508"}]},"ts":"1689790484508"} 2023-07-19 18:14:44,509 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=837c8a77a3f57ea032dc348c07a78eb5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:44,509 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790484509"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790484509"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790484509"}]},"ts":"1689790484509"} 2023-07-19 18:14:44,510 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=c8edf6a7445a15ab399f972114a6d7bd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:44,510 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790484510"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790484510"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790484510"}]},"ts":"1689790484510"} 2023-07-19 18:14:44,512 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=8554daa8decc8b7120bee218140f6130, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:44,512 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:44,512 DEBUG [PEWorker-3] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790484512"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790484512"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790484512"}]},"ts":"1689790484512"} 2023-07-19 18:14:44,512 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790484512"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790484512"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790484512"}]},"ts":"1689790484512"} 2023-07-19 18:14:44,513 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=65, state=RUNNABLE; CloseRegionProcedure e2be764d395fd5579eecdfed7812e562, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:44,514 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=64, state=RUNNABLE; CloseRegionProcedure 837c8a77a3f57ea032dc348c07a78eb5, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:44,517 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=67, state=RUNNABLE; CloseRegionProcedure c8edf6a7445a15ab399f972114a6d7bd, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:44,518 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=68, state=RUNNABLE; CloseRegionProcedure 8554daa8decc8b7120bee218140f6130, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:44,525 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=66, state=RUNNABLE; CloseRegionProcedure 6ffaeb6a58ebd1d9bd26e3dfee88ce1d, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:44,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-19 18:14:44,671 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:44,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e2be764d395fd5579eecdfed7812e562, disabling compactions & flushes 2023-07-19 18:14:44,672 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 2023-07-19 18:14:44,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 2023-07-19 18:14:44,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. after waiting 0 ms 2023-07-19 18:14:44,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 
2023-07-19 18:14:44,677 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:44,679 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c8edf6a7445a15ab399f972114a6d7bd, disabling compactions & flushes 2023-07-19 18:14:44,679 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 2023-07-19 18:14:44,679 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 2023-07-19 18:14:44,679 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. after waiting 0 ms 2023-07-19 18:14:44,679 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 2023-07-19 18:14:44,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:44,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562. 2023-07-19 18:14:44,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e2be764d395fd5579eecdfed7812e562: 2023-07-19 18:14:44,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:44,695 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:44,695 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:44,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8554daa8decc8b7120bee218140f6130, disabling compactions & flushes 2023-07-19 18:14:44,697 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 2023-07-19 18:14:44,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 2023-07-19 18:14:44,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 
after waiting 0 ms 2023-07-19 18:14:44,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 2023-07-19 18:14:44,697 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd. 2023-07-19 18:14:44,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c8edf6a7445a15ab399f972114a6d7bd: 2023-07-19 18:14:44,697 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=e2be764d395fd5579eecdfed7812e562, regionState=CLOSED 2023-07-19 18:14:44,698 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790484697"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790484697"}]},"ts":"1689790484697"} 2023-07-19 18:14:44,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:44,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:44,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6ffaeb6a58ebd1d9bd26e3dfee88ce1d, disabling compactions & flushes 2023-07-19 18:14:44,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 2023-07-19 18:14:44,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 2023-07-19 18:14:44,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. after waiting 0 ms 2023-07-19 18:14:44,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 
2023-07-19 18:14:44,703 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=c8edf6a7445a15ab399f972114a6d7bd, regionState=CLOSED 2023-07-19 18:14:44,703 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790484703"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790484703"}]},"ts":"1689790484703"} 2023-07-19 18:14:44,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:44,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:44,710 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130. 2023-07-19 18:14:44,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8554daa8decc8b7120bee218140f6130: 2023-07-19 18:14:44,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d. 
2023-07-19 18:14:44,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6ffaeb6a58ebd1d9bd26e3dfee88ce1d: 2023-07-19 18:14:44,716 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=65 2023-07-19 18:14:44,716 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=65, state=SUCCESS; CloseRegionProcedure e2be764d395fd5579eecdfed7812e562, server=jenkins-hbase4.apache.org,38251,1689790473799 in 188 msec 2023-07-19 18:14:44,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:44,717 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, regionState=CLOSED 2023-07-19 18:14:44,717 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689790484717"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790484717"}]},"ts":"1689790484717"} 2023-07-19 18:14:44,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:44,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:44,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 837c8a77a3f57ea032dc348c07a78eb5, disabling compactions & flushes 2023-07-19 18:14:44,719 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 2023-07-19 18:14:44,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 2023-07-19 18:14:44,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. after waiting 0 ms 2023-07-19 18:14:44,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 
2023-07-19 18:14:44,719 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=67 2023-07-19 18:14:44,719 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=8554daa8decc8b7120bee218140f6130, regionState=CLOSED 2023-07-19 18:14:44,719 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=67, state=SUCCESS; CloseRegionProcedure c8edf6a7445a15ab399f972114a6d7bd, server=jenkins-hbase4.apache.org,38419,1689790478179 in 188 msec 2023-07-19 18:14:44,719 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790484719"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790484719"}]},"ts":"1689790484719"} 2023-07-19 18:14:44,720 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2be764d395fd5579eecdfed7812e562, UNASSIGN in 219 msec 2023-07-19 18:14:44,722 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8edf6a7445a15ab399f972114a6d7bd, UNASSIGN in 222 msec 2023-07-19 18:14:44,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=66 2023-07-19 18:14:44,724 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=66, state=SUCCESS; CloseRegionProcedure 6ffaeb6a58ebd1d9bd26e3dfee88ce1d, server=jenkins-hbase4.apache.org,38419,1689790478179 in 196 msec 2023-07-19 18:14:44,724 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=68 2023-07-19 18:14:44,724 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=68, state=SUCCESS; CloseRegionProcedure 8554daa8decc8b7120bee218140f6130, server=jenkins-hbase4.apache.org,38251,1689790473799 in 204 msec 2023-07-19 18:14:44,725 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6ffaeb6a58ebd1d9bd26e3dfee88ce1d, UNASSIGN in 227 msec 2023-07-19 18:14:44,726 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8554daa8decc8b7120bee218140f6130, UNASSIGN in 227 msec 2023-07-19 18:14:44,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:44,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5. 
2023-07-19 18:14:44,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 837c8a77a3f57ea032dc348c07a78eb5: 2023-07-19 18:14:44,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:44,735 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=837c8a77a3f57ea032dc348c07a78eb5, regionState=CLOSED 2023-07-19 18:14:44,735 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689790484735"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790484735"}]},"ts":"1689790484735"} 2023-07-19 18:14:44,742 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=64 2023-07-19 18:14:44,742 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=64, state=SUCCESS; CloseRegionProcedure 837c8a77a3f57ea032dc348c07a78eb5, server=jenkins-hbase4.apache.org,38251,1689790473799 in 226 msec 2023-07-19 18:14:44,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=63 2023-07-19 18:14:44,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=837c8a77a3f57ea032dc348c07a78eb5, UNASSIGN in 245 msec 2023-07-19 18:14:44,747 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790484747"}]},"ts":"1689790484747"} 2023-07-19 18:14:44,750 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-19 18:14:44,752 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-19 18:14:44,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 267 msec 2023-07-19 18:14:44,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-19 18:14:44,796 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-19 18:14:44,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:44,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:44,815 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:44,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from 
rsgroup 'Group_testTableMoveTruncateAndDrop_401676244' 2023-07-19 18:14:44,818 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:44,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:44,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:44,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:44,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:44,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-19 18:14:44,833 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:44,833 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:44,833 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:44,833 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d 2023-07-19 18:14:44,833 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:44,837 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5/recovered.edits] 2023-07-19 18:14:44,838 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562/recovered.edits] 2023-07-19 18:14:44,838 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd/recovered.edits] 2023-07-19 18:14:44,839 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d/recovered.edits] 2023-07-19 18:14:44,839 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130/recovered.edits] 2023-07-19 18:14:44,851 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562/recovered.edits/4.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562/recovered.edits/4.seqid 2023-07-19 18:14:44,851 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d/recovered.edits/4.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d/recovered.edits/4.seqid 2023-07-19 18:14:44,851 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5/recovered.edits/4.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5/recovered.edits/4.seqid 2023-07-19 18:14:44,852 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130/recovered.edits/4.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130/recovered.edits/4.seqid 2023-07-19 18:14:44,852 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6ffaeb6a58ebd1d9bd26e3dfee88ce1d 
2023-07-19 18:14:44,852 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2be764d395fd5579eecdfed7812e562 2023-07-19 18:14:44,852 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/837c8a77a3f57ea032dc348c07a78eb5 2023-07-19 18:14:44,853 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8554daa8decc8b7120bee218140f6130 2023-07-19 18:14:44,854 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd/recovered.edits/4.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd/recovered.edits/4.seqid 2023-07-19 18:14:44,854 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8edf6a7445a15ab399f972114a6d7bd 2023-07-19 18:14:44,854 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 18:14:44,857 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:44,865 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-19 18:14:44,868 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-19 18:14:44,869 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:44,870 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-19 18:14:44,870 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790484870"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:44,870 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790484870"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:44,870 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790484870"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:44,870 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790484870"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:44,870 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790484870"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:44,875 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-19 18:14:44,875 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 837c8a77a3f57ea032dc348c07a78eb5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689790482420.837c8a77a3f57ea032dc348c07a78eb5.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => e2be764d395fd5579eecdfed7812e562, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689790482420.e2be764d395fd5579eecdfed7812e562.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 6ffaeb6a58ebd1d9bd26e3dfee88ce1d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689790482420.6ffaeb6a58ebd1d9bd26e3dfee88ce1d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => c8edf6a7445a15ab399f972114a6d7bd, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689790482420.c8edf6a7445a15ab399f972114a6d7bd.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 8554daa8decc8b7120bee218140f6130, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689790482420.8554daa8decc8b7120bee218140f6130.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-19 18:14:44,875 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-19 18:14:44,875 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689790484875"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:44,877 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-19 18:14:44,881 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 18:14:44,882 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 77 msec 2023-07-19 18:14:44,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-19 18:14:44,936 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-19 18:14:44,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:44,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:44,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:44,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:44,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:44,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 18:14:44,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:44,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:44,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:44,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:44,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:44,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:44,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 18:14:44,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:44,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:44,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 18:14:44,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:44,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:38251] to rsgroup default 2023-07-19 18:14:44,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:44,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:44,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:44,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_401676244, current retry=0 2023-07-19 18:14:44,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799, jenkins-hbase4.apache.org,38419,1689790478179] are moved back to Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:44,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_401676244 => default 2023-07-19 18:14:44,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:44,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_401676244 2023-07-19 18:14:44,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:44,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:45,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:45,009 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:45,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:45,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:45,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK 
GroupInfo count: 4 2023-07-19 18:14:45,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:45,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:45,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:45,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 148 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791685026, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:45,028 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 18:14:45,030 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:45,032 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,032 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,033 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:45,034 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:45,034 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:45,070 INFO [Listener at localhost/46039] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=495 (was 425) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1208631635_17 at /127.0.0.1:43270 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38419 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:41243 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-387814466_17 at /127.0.0.1:51628 [Receiving block BP-1139031693-172.31.14.131-1689790468337:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38419 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:41243 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-387814466_17 at /127.0.0.1:34252 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61716@0x714d7cbd sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1502525435.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-387814466_17 at /127.0.0.1:51620 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1139031693-172.31.14.131-1689790468337:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38419 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38419 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp946031351-637 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/658776246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1139031693-172.31.14.131-1689790468337:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:38419-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-10 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:38419Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-387814466_17 at /127.0.0.1:34238 [Receiving block BP-1139031693-172.31.14.131-1689790468337:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475-prefix:jenkins-hbase4.apache.org,38419,1689790478179 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-387814466_17 at /127.0.0.1:40056 [Receiving block BP-1139031693-172.31.14.131-1689790468337:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-638-acceptor-0@7d677e2f-ServerConnector@35efd609{HTTP/1.1, (http/1.1)}{0.0.0.0:46573} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61716@0x714d7cbd-SendThread(127.0.0.1:61716) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp946031351-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:38419 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1139031693-172.31.14.131-1689790468337:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38419 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38419 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3cbddc65-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61716@0x714d7cbd-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38419 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38419 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38419 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38419 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=786 (was 683) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=538 (was 479) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 173), AvailableMemoryMB=3097 (was 3611) 2023-07-19 18:14:45,107 INFO [Listener at localhost/46039] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=495, OpenFileDescriptor=786, MaxFileDescriptor=60000, SystemLoadAverage=538, ProcessCount=173, AvailableMemoryMB=3094 2023-07-19 18:14:45,109 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-19 18:14:45,116 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,117 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,119 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:45,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 18:14:45,119 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:45,122 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:45,122 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:45,123 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:45,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:45,131 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:45,138 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:45,140 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:45,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:45,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:45,150 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:45,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:45,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:45,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 176 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791685166, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:45,167 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 18:14:45,169 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:45,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,172 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:45,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:45,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:45,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-19 18:14:45,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:45,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:51588 deadline: 1689791685175, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-19 18:14:45,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-19 18:14:45,178 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:45,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:51588 deadline: 1689791685178, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-19 18:14:45,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-19 18:14:45,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:45,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:51588 deadline: 1689791685181, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-19 18:14:45,184 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-19 18:14:45,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-19 18:14:45,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:45,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:45,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:45,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:45,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
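The addRSGroup records above show the server rejecting the names foo*, foo@ and - with ConstraintException("RSGroup name should only contain alphanumeric characters"), while foo_123 is accepted, so the check evidently tolerates underscores as well. Below is a minimal Java sketch of an equivalent check, inferred from the observed behavior rather than copied from RSGroupInfoManagerImpl.checkGroupName; the class name, regex and exception type are illustrative assumptions.

import java.util.regex.Pattern;

/** Illustrative stand-in for the server-side rsgroup name check (not HBase's actual code). */
public class GroupNameCheck {
  // Inferred from the log: alphanumerics and '_' pass, anything else is rejected.
  private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

  static void checkGroupName(String name) {
    if (name == null || !VALID.matcher(name).matches()) {
      // The real server surfaces this as org.apache.hadoop.hbase.constraint.ConstraintException.
      throw new IllegalArgumentException(
          "RSGroup name should only contain alphanumeric characters: " + name);
    }
  }

  public static void main(String[] args) {
    // The four names exercised by testValidGroupNames in the records above.
    for (String name : new String[] {"foo*", "foo@", "-", "foo_123"}) {
      try {
        checkGroupName(name);
        System.out.println(name + " -> accepted");
      } catch (IllegalArgumentException e) {
        System.out.println(name + " -> rejected: " + e.getMessage());
      }
    }
  }
}

Run against the log's inputs, only foo_123 passes, matching the "Writing ZK GroupInfo count: 6" record that follows its creation.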
2023-07-19 18:14:45,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:45,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:45,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:45,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-19 18:14:45,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:45,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 18:14:45,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:45,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:45,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
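Each group mutation above is followed by "Updating znode: /hbase/rsgroup/<group>" and "Writing ZK GroupInfo count: N": the group manager mirrors its state into ZooKeeper under /hbase/rsgroup, and the count records how many entries were flushed. The sketch below shows how that mirror could be inspected with a plain ZooKeeper client, in the spirit of what VerifyingRSGroupAdminClient does after each call; the connect string and session timeout are placeholders, not values from this run.

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

/** Lists the rsgroup znodes mirrored under /hbase/rsgroup. Illustrative only;
 *  the connect string stands in for the test cluster's ZooKeeper quorum. */
public class ListRSGroupZNodes {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
    try {
      List<String> groups = zk.getChildren("/hbase/rsgroup", false);
      System.out.println("mirrored rsgroups: " + groups); // e.g. [default, master, foo_123]
    } finally {
      zk.close();
    }
  }
}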
2023-07-19 18:14:45,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:45,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:45,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:45,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:45,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:45,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:45,249 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:45,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:45,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:45,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:45,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:45,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:45,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:45,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 220 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791685278, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:45,279 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:14:45,281 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:45,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,282 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:45,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:45,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:45,301 INFO [Listener at localhost/46039] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=498 (was 495) Potentially hanging thread: hconnection-0x22934466-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=786 (was 786), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=538 (was 538), ProcessCount=173 (was 173), AvailableMemoryMB=3076 (was 3094) 2023-07-19 18:14:45,322 INFO [Listener at localhost/46039] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=498, OpenFileDescriptor=786, MaxFileDescriptor=60000, SystemLoadAverage=538, ProcessCount=173, AvailableMemoryMB=3072 2023-07-19 18:14:45,323 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-19 18:14:45,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:45,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
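The ResourceChecker "before:"/"after:" lines above snapshot thread count, open file descriptors, system load, process count and available memory around each test, diff the two snapshots, flag growth with markers such as "- Thread LEAK? -", and dump the stacks of potentially hanging threads. A stripped-down sketch of that before/after bookkeeping follows; it tracks only the thread count, and the class and method names are made up for illustration, not taken from hbase.ResourceChecker.

import java.lang.management.ManagementFactory;

/** Minimal before/after resource diff in the spirit of the ResourceChecker lines above. */
public class SimpleResourceCheck {
  private int threadsBefore;

  public void before(String testName) {
    threadsBefore = ManagementFactory.getThreadMXBean().getThreadCount();
    System.out.println("before: " + testName + " Thread=" + threadsBefore);
  }

  public void after(String testName) {
    int threadsAfter = ManagementFactory.getThreadMXBean().getThreadCount();
    System.out.println("after: " + testName + " Thread=" + threadsAfter
        + " (was " + threadsBefore + ")"
        + (threadsAfter > threadsBefore ? " - Thread LEAK? -" : ""));
  }

  public static void main(String[] args) {
    SimpleResourceCheck check = new SimpleResourceCheck();
    check.before("demoTest");
    // Leave a thread running so the diff reports a suspect, as the log does above.
    Thread t = new Thread(() -> {
      try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
    });
    t.start();
    check.after("demoTest");
  }
}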
2023-07-19 18:14:45,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:45,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:45,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:45,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:45,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:45,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:45,343 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:45,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:45,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:45,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:45,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:45,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:45,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:45,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791685363, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:45,364 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:14:45,366 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:45,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,368 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:45,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:45,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:45,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:45,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:45,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-19 18:14:45,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 18:14:45,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:45,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:45,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:45,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:45,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:45,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:38251] to rsgroup bar 2023-07-19 18:14:45,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:45,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 18:14:45,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:45,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:45,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(238): Moving server region 9ea4dee563e7f0f7a6c584dc1c5c929d, which do not belong to RSGroup bar 2023-07-19 18:14:45,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=9ea4dee563e7f0f7a6c584dc1c5c929d, REOPEN/MOVE 2023-07-19 18:14:45,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-19 18:14:45,395 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=9ea4dee563e7f0f7a6c584dc1c5c929d, REOPEN/MOVE 2023-07-19 18:14:45,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-19 18:14:45,396 INFO [PEWorker-4] assignment.RegionStateStore(219): 
pid=75 updating hbase:meta row=9ea4dee563e7f0f7a6c584dc1c5c929d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:45,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-19 18:14:45,397 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-19 18:14:45,397 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790485396"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790485396"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790485396"}]},"ts":"1689790485396"} 2023-07-19 18:14:45,398 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40615,1689790473552, state=CLOSING 2023-07-19 18:14:45,399 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 9ea4dee563e7f0f7a6c584dc1c5c929d, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:45,404 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 18:14:45,404 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=76, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:45,404 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 18:14:45,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:45,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9ea4dee563e7f0f7a6c584dc1c5c929d, disabling compactions & flushes 2023-07-19 18:14:45,553 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-19 18:14:45,553 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:14:45,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:14:45,554 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 18:14:45,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 
after waiting 0 ms 2023-07-19 18:14:45,555 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 18:14:45,555 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 18:14:45,555 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 18:14:45,555 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 18:14:45,555 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:14:45,555 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=41.95 KB heapSize=64.95 KB 2023-07-19 18:14:45,555 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 9ea4dee563e7f0f7a6c584dc1c5c929d 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-19 18:14:45,588 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/.tmp/info/953e9352d83a4968be54e843c790298b 2023-07-19 18:14:45,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/.tmp/info/953e9352d83a4968be54e843c790298b as hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/info/953e9352d83a4968be54e843c790298b 2023-07-19 18:14:45,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/info/953e9352d83a4968be54e843c790298b, entries=2, sequenceid=6, filesize=4.8 K 2023-07-19 18:14:45,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 9ea4dee563e7f0f7a6c584dc1c5c929d in 58ms, sequenceid=6, compaction requested=false 2023-07-19 18:14:45,620 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.89 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/info/5864b7ad5b63402b9991945a3e439421 2023-07-19 18:14:45,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-19 18:14:45,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 
2023-07-19 18:14:45,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9ea4dee563e7f0f7a6c584dc1c5c929d: 2023-07-19 18:14:45,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9ea4dee563e7f0f7a6c584dc1c5c929d move to jenkins-hbase4.apache.org,43775,1689790473982 record at close sequenceid=6 2023-07-19 18:14:45,631 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5864b7ad5b63402b9991945a3e439421 2023-07-19 18:14:45,637 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 9ea4dee563e7f0f7a6c584dc1c5c929d, server=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:45,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:45,667 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/rep_barrier/9feb493ff59c4c06b1c6e02b388856eb 2023-07-19 18:14:45,674 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9feb493ff59c4c06b1c6e02b388856eb 2023-07-19 18:14:45,700 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/table/991e47d6fd134ad8a0ac54b0e1086f89 2023-07-19 18:14:45,709 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 991e47d6fd134ad8a0ac54b0e1086f89 2023-07-19 18:14:45,710 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/info/5864b7ad5b63402b9991945a3e439421 as hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/info/5864b7ad5b63402b9991945a3e439421 2023-07-19 18:14:45,717 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5864b7ad5b63402b9991945a3e439421 2023-07-19 18:14:45,717 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/info/5864b7ad5b63402b9991945a3e439421, entries=46, sequenceid=95, filesize=10.2 K 2023-07-19 18:14:45,718 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/rep_barrier/9feb493ff59c4c06b1c6e02b388856eb as hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/rep_barrier/9feb493ff59c4c06b1c6e02b388856eb 2023-07-19 18:14:45,726 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9feb493ff59c4c06b1c6e02b388856eb 2023-07-19 18:14:45,727 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/rep_barrier/9feb493ff59c4c06b1c6e02b388856eb, entries=10, sequenceid=95, filesize=6.1 K 2023-07-19 18:14:45,728 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/table/991e47d6fd134ad8a0ac54b0e1086f89 as hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/table/991e47d6fd134ad8a0ac54b0e1086f89 2023-07-19 18:14:45,736 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 991e47d6fd134ad8a0ac54b0e1086f89 2023-07-19 18:14:45,736 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/table/991e47d6fd134ad8a0ac54b0e1086f89, entries=15, sequenceid=95, filesize=6.2 K 2023-07-19 18:14:45,737 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~41.95 KB/42961, heapSize ~64.91 KB/66464, currentSize=0 B/0 for 1588230740 in 182ms, sequenceid=95, compaction requested=false 2023-07-19 18:14:45,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-19 18:14:45,750 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 18:14:45,750 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 18:14:45,750 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 18:14:45,750 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,43775,1689790473982 record at close sequenceid=95 2023-07-19 18:14:45,752 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-19 18:14:45,753 WARN [PEWorker-4] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-19 18:14:45,755 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=76 2023-07-19 18:14:45,755 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=76, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40615,1689790473552 in 349 msec 2023-07-19 18:14:45,756 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, 
location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:45,906 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43775,1689790473982, state=OPENING 2023-07-19 18:14:45,912 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=76, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:45,912 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 18:14:45,913 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 18:14:46,072 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 18:14:46,072 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:14:46,074 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43775%2C1689790473982.meta, suffix=.meta, logDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,43775,1689790473982, archiveDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs, maxLogs=32 2023-07-19 18:14:46,096 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK] 2023-07-19 18:14:46,098 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK] 2023-07-19 18:14:46,098 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK] 2023-07-19 18:14:46,102 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,43775,1689790473982/jenkins-hbase4.apache.org%2C43775%2C1689790473982.meta.1689790486075.meta 2023-07-19 18:14:46,102 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42841,DS-214dd5b4-56cd-4179-a190-89691ecc0162,DISK], DatanodeInfoWithStorage[127.0.0.1:44697,DS-5e5f567a-0cfa-476c-aaf1-daa8c87d818c,DISK], DatanodeInfoWithStorage[127.0.0.1:42045,DS-7e5ff312-bee9-418f-8326-eda7dc88166d,DISK]] 2023-07-19 18:14:46,102 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 
18:14:46,102 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 18:14:46,103 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 18:14:46,103 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-19 18:14:46,103 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 18:14:46,103 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:46,103 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 18:14:46,103 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 18:14:46,104 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 18:14:46,106 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/info 2023-07-19 18:14:46,106 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/info 2023-07-19 18:14:46,106 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 18:14:46,119 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5864b7ad5b63402b9991945a3e439421 2023-07-19 18:14:46,119 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/info/5864b7ad5b63402b9991945a3e439421 2023-07-19 18:14:46,120 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-19 18:14:46,120 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 18:14:46,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:14:46,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:14:46,121 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 18:14:46,129 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9feb493ff59c4c06b1c6e02b388856eb 2023-07-19 18:14:46,129 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/rep_barrier/9feb493ff59c4c06b1c6e02b388856eb 2023-07-19 18:14:46,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:46,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 18:14:46,130 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/table 2023-07-19 18:14:46,130 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/table 2023-07-19 18:14:46,130 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 18:14:46,138 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 991e47d6fd134ad8a0ac54b0e1086f89 2023-07-19 18:14:46,138 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/table/991e47d6fd134ad8a0ac54b0e1086f89 2023-07-19 18:14:46,139 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:46,140 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740 2023-07-19 18:14:46,141 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740 2023-07-19 18:14:46,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-19 18:14:46,147 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 18:14:46,148 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=99; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11410743360, jitterRate=0.0627082884311676}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 18:14:46,148 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 18:14:46,149 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=79, masterSystemTime=1689790486068 2023-07-19 18:14:46,151 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 18:14:46,151 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 18:14:46,152 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43775,1689790473982, state=OPEN 2023-07-19 18:14:46,153 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 18:14:46,153 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 18:14:46,154 INFO [PEWorker-4] 
assignment.RegionStateStore(219): pid=75 updating hbase:meta row=9ea4dee563e7f0f7a6c584dc1c5c929d, regionState=CLOSED 2023-07-19 18:14:46,154 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790486154"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790486154"}]},"ts":"1689790486154"} 2023-07-19 18:14:46,155 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40615] ipc.CallRunner(144): callId: 184 service: ClientService methodName: Mutate size: 218 connection: 172.31.14.131:32980 deadline: 1689790546155, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43775 startCode=1689790473982. As of locationSeqNum=95. 2023-07-19 18:14:46,158 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=76 2023-07-19 18:14:46,158 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=76, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43775,1689790473982 in 244 msec 2023-07-19 18:14:46,160 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 763 msec 2023-07-19 18:14:46,263 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-19 18:14:46,263 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; CloseRegionProcedure 9ea4dee563e7f0f7a6c584dc1c5c929d, server=jenkins-hbase4.apache.org,40615,1689790473552 in 861 msec 2023-07-19 18:14:46,263 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=9ea4dee563e7f0f7a6c584dc1c5c929d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:46,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-19 18:14:46,414 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=9ea4dee563e7f0f7a6c584dc1c5c929d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:46,414 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790486414"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790486414"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790486414"}]},"ts":"1689790486414"} 2023-07-19 18:14:46,419 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=75, state=RUNNABLE; OpenRegionProcedure 9ea4dee563e7f0f7a6c584dc1c5c929d, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:46,584 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 
2023-07-19 18:14:46,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9ea4dee563e7f0f7a6c584dc1c5c929d, NAME => 'hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:46,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:46,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:46,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:46,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:46,587 INFO [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:46,590 DEBUG [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/info 2023-07-19 18:14:46,590 DEBUG [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/info 2023-07-19 18:14:46,590 INFO [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9ea4dee563e7f0f7a6c584dc1c5c929d columnFamilyName info 2023-07-19 18:14:46,607 DEBUG [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] regionserver.HStore(539): loaded hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/info/953e9352d83a4968be54e843c790298b 2023-07-19 18:14:46,607 INFO [StoreOpener-9ea4dee563e7f0f7a6c584dc1c5c929d-1] regionserver.HStore(310): Store=9ea4dee563e7f0f7a6c584dc1c5c929d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:46,608 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:46,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:46,613 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:14:46,614 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9ea4dee563e7f0f7a6c584dc1c5c929d; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11474462720, jitterRate=0.06864261627197266}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:46,614 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9ea4dee563e7f0f7a6c584dc1c5c929d: 2023-07-19 18:14:46,615 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d., pid=80, masterSystemTime=1689790486576 2023-07-19 18:14:46,617 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:14:46,617 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 
2023-07-19 18:14:46,618 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=9ea4dee563e7f0f7a6c584dc1c5c929d, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:46,618 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790486618"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790486618"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790486618"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790486618"}]},"ts":"1689790486618"} 2023-07-19 18:14:46,623 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=75 2023-07-19 18:14:46,623 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=75, state=SUCCESS; OpenRegionProcedure 9ea4dee563e7f0f7a6c584dc1c5c929d, server=jenkins-hbase4.apache.org,43775,1689790473982 in 201 msec 2023-07-19 18:14:46,624 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=9ea4dee563e7f0f7a6c584dc1c5c929d, REOPEN/MOVE in 1.2300 sec 2023-07-19 18:14:47,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799, jenkins-hbase4.apache.org,38419,1689790478179, jenkins-hbase4.apache.org,40615,1689790473552] are moved back to default 2023-07-19 18:14:47,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-19 18:14:47,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:47,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:47,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:47,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-19 18:14:47,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:47,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:14:47,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] 
procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-19 18:14:47,415 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:14:47,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-19 18:14:47,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-19 18:14:47,418 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:47,419 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 18:14:47,419 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:47,420 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:47,422 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:14:47,425 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c 2023-07-19 18:14:47,425 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c empty. 
2023-07-19 18:14:47,426 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c 2023-07-19 18:14:47,426 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-19 18:14:47,451 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:47,458 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9d08d66426997727829ead529e62249c, NAME => 'Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:47,477 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:47,477 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 9d08d66426997727829ead529e62249c, disabling compactions & flushes 2023-07-19 18:14:47,477 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:47,477 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:47,477 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. after waiting 0 ms 2023-07-19 18:14:47,477 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:47,477 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 
2023-07-19 18:14:47,477 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 9d08d66426997727829ead529e62249c: 2023-07-19 18:14:47,480 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:14:47,481 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790487481"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790487481"}]},"ts":"1689790487481"} 2023-07-19 18:14:47,483 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 18:14:47,484 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:14:47,485 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790487485"}]},"ts":"1689790487485"} 2023-07-19 18:14:47,486 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-19 18:14:47,493 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, ASSIGN}] 2023-07-19 18:14:47,496 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, ASSIGN 2023-07-19 18:14:47,496 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:47,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-19 18:14:47,648 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:47,648 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790487648"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790487648"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790487648"}]},"ts":"1689790487648"} 2023-07-19 18:14:47,650 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 
18:14:47,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-19 18:14:47,731 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 18:14:47,806 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:47,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d08d66426997727829ead529e62249c, NAME => 'Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:47,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 9d08d66426997727829ead529e62249c 2023-07-19 18:14:47,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:47,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9d08d66426997727829ead529e62249c 2023-07-19 18:14:47,807 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d08d66426997727829ead529e62249c 2023-07-19 18:14:47,808 INFO [StoreOpener-9d08d66426997727829ead529e62249c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d08d66426997727829ead529e62249c 2023-07-19 18:14:47,809 DEBUG [StoreOpener-9d08d66426997727829ead529e62249c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/f 2023-07-19 18:14:47,809 DEBUG [StoreOpener-9d08d66426997727829ead529e62249c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/f 2023-07-19 18:14:47,810 INFO [StoreOpener-9d08d66426997727829ead529e62249c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d08d66426997727829ead529e62249c columnFamilyName f 2023-07-19 18:14:47,810 INFO [StoreOpener-9d08d66426997727829ead529e62249c-1] regionserver.HStore(310): 
Store=9d08d66426997727829ead529e62249c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:47,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c 2023-07-19 18:14:47,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c 2023-07-19 18:14:47,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d08d66426997727829ead529e62249c 2023-07-19 18:14:47,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:47,817 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d08d66426997727829ead529e62249c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10230322240, jitterRate=-0.04722699522972107}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:47,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d08d66426997727829ead529e62249c: 2023-07-19 18:14:47,818 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c., pid=83, masterSystemTime=1689790487802 2023-07-19 18:14:47,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:47,819 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 
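Once AssignRegionHandler logs "Opened Group_testFailRemoveGroup,...", the region's location is readable from hbase:meta; the "Waiting until all regions ... get assigned" entries that follow poll for exactly this state. A small hedged sketch of how a client could confirm where the single region landed; the class name is illustrative and connection setup is assumed:

// Hedged sketch: reading the assignment of the new table's single region.
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ShowRegionLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("Group_testFailRemoveGroup"))) {
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      for (HRegionLocation loc : locations) {
        // e.g. 9d08d66426997727829ead529e62249c -> jenkins-hbase4.apache.org,43775,...
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}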
2023-07-19 18:14:47,820 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:47,820 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790487820"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790487820"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790487820"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790487820"}]},"ts":"1689790487820"} 2023-07-19 18:14:47,823 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-19 18:14:47,823 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,43775,1689790473982 in 171 msec 2023-07-19 18:14:47,825 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-19 18:14:47,825 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, ASSIGN in 330 msec 2023-07-19 18:14:47,826 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:14:47,826 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790487826"}]},"ts":"1689790487826"} 2023-07-19 18:14:47,828 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-19 18:14:47,831 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:14:47,833 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 422 msec 2023-07-19 18:14:48,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-19 18:14:48,020 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-19 18:14:48,021 DEBUG [Listener at localhost/46039] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-19 18:14:48,021 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:48,022 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40615] ipc.CallRunner(144): callId: 277 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:32984 deadline: 1689790548021, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43775 startCode=1689790473982. As of locationSeqNum=95. 2023-07-19 18:14:48,125 DEBUG [hconnection-0xbd8ecb1-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:14:48,127 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55666, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:14:48,135 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-19 18:14:48,135 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:48,135 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-19 18:14:48,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-19 18:14:48,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:48,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 18:14:48,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:48,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:48,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-19 18:14:48,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region 9d08d66426997727829ead529e62249c to RSGroup bar 2023-07-19 18:14:48,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:48,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:48,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:48,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:48,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-19 18:14:48,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:48,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, REOPEN/MOVE 2023-07-19 18:14:48,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-19 18:14:48,145 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, REOPEN/MOVE 2023-07-19 18:14:48,146 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:48,146 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790488146"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790488146"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790488146"}]},"ts":"1689790488146"} 2023-07-19 18:14:48,147 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:48,300 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9d08d66426997727829ead529e62249c 2023-07-19 18:14:48,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d08d66426997727829ead529e62249c, disabling compactions & flushes 2023-07-19 18:14:48,301 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:48,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:48,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. after waiting 0 ms 2023-07-19 18:14:48,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:48,306 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:48,307 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 
2023-07-19 18:14:48,307 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d08d66426997727829ead529e62249c: 2023-07-19 18:14:48,307 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9d08d66426997727829ead529e62249c move to jenkins-hbase4.apache.org,38419,1689790478179 record at close sequenceid=2 2023-07-19 18:14:48,309 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9d08d66426997727829ead529e62249c 2023-07-19 18:14:48,309 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=CLOSED 2023-07-19 18:14:48,310 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790488309"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790488309"}]},"ts":"1689790488309"} 2023-07-19 18:14:48,314 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-19 18:14:48,314 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,43775,1689790473982 in 165 msec 2023-07-19 18:14:48,315 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38419,1689790478179; forceNewPlan=false, retain=false 2023-07-19 18:14:48,465 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 18:14:48,465 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:48,466 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790488465"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790488465"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790488465"}]},"ts":"1689790488465"} 2023-07-19 18:14:48,468 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:48,624 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 
2023-07-19 18:14:48,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d08d66426997727829ead529e62249c, NAME => 'Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:48,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 9d08d66426997727829ead529e62249c 2023-07-19 18:14:48,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:48,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9d08d66426997727829ead529e62249c 2023-07-19 18:14:48,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d08d66426997727829ead529e62249c 2023-07-19 18:14:48,627 INFO [StoreOpener-9d08d66426997727829ead529e62249c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d08d66426997727829ead529e62249c 2023-07-19 18:14:48,628 DEBUG [StoreOpener-9d08d66426997727829ead529e62249c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/f 2023-07-19 18:14:48,628 DEBUG [StoreOpener-9d08d66426997727829ead529e62249c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/f 2023-07-19 18:14:48,629 INFO [StoreOpener-9d08d66426997727829ead529e62249c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d08d66426997727829ead529e62249c columnFamilyName f 2023-07-19 18:14:48,630 INFO [StoreOpener-9d08d66426997727829ead529e62249c-1] regionserver.HStore(310): Store=9d08d66426997727829ead529e62249c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:48,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c 2023-07-19 18:14:48,632 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c 2023-07-19 18:14:48,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d08d66426997727829ead529e62249c 2023-07-19 18:14:48,637 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d08d66426997727829ead529e62249c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11787531680, jitterRate=0.09779943525791168}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:48,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d08d66426997727829ead529e62249c: 2023-07-19 18:14:48,638 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c., pid=86, masterSystemTime=1689790488619 2023-07-19 18:14:48,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:48,647 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:48,648 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:48,648 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790488648"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790488648"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790488648"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790488648"}]},"ts":"1689790488648"} 2023-07-19 18:14:48,655 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-19 18:14:48,655 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,38419,1689790478179 in 182 msec 2023-07-19 18:14:48,656 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, REOPEN/MOVE in 512 msec 2023-07-19 18:14:49,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-19 18:14:49,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
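The MoveTables request logged at 18:14:48,137 ("move tables [Group_testFailRemoveGroup] to rsgroup bar") drives the REOPEN/MOVE above and ends with "All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar." A hedged sketch of the corresponding client call, again assuming the hbase-rsgroup client and an illustrative class name:

// Hedged sketch of the MoveTables call behind the REOPEN/MOVE above.
// The call blocks (ProcedureSyncWait in the log) until the region has been
// reopened on a server that belongs to the target group.
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToBar {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
    }
  }
}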
2023-07-19 18:14:49,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:49,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:49,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:49,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-19 18:14:49,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:49,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-19 18:14:49,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:49,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:51588 deadline: 1689791689152, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-19 18:14:49,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:38251] to rsgroup default 2023-07-19 18:14:49,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:49,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:51588 deadline: 1689791689154, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-19 18:14:49,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-19 18:14:49,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:49,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 18:14:49,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:49,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:49,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-19 18:14:49,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region 9d08d66426997727829ead529e62249c to RSGroup default 2023-07-19 18:14:49,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, REOPEN/MOVE 2023-07-19 18:14:49,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 18:14:49,163 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, REOPEN/MOVE 2023-07-19 18:14:49,164 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:49,164 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790489164"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790489164"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790489164"}]},"ts":"1689790489164"} 2023-07-19 18:14:49,168 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:49,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9d08d66426997727829ead529e62249c 2023-07-19 18:14:49,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d08d66426997727829ead529e62249c, disabling compactions & flushes 2023-07-19 18:14:49,325 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:49,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:49,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. after waiting 0 ms 2023-07-19 18:14:49,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:49,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 18:14:49,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 
2023-07-19 18:14:49,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d08d66426997727829ead529e62249c: 2023-07-19 18:14:49,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9d08d66426997727829ead529e62249c move to jenkins-hbase4.apache.org,43775,1689790473982 record at close sequenceid=5 2023-07-19 18:14:49,333 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9d08d66426997727829ead529e62249c 2023-07-19 18:14:49,333 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=CLOSED 2023-07-19 18:14:49,333 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790489333"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790489333"}]},"ts":"1689790489333"} 2023-07-19 18:14:49,336 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-19 18:14:49,337 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,38419,1689790478179 in 169 msec 2023-07-19 18:14:49,337 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:49,488 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:49,488 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790489487"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790489487"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790489487"}]},"ts":"1689790489487"} 2023-07-19 18:14:49,490 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:49,647 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 
2023-07-19 18:14:49,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d08d66426997727829ead529e62249c, NAME => 'Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:49,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 9d08d66426997727829ead529e62249c 2023-07-19 18:14:49,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:49,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9d08d66426997727829ead529e62249c 2023-07-19 18:14:49,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d08d66426997727829ead529e62249c 2023-07-19 18:14:49,649 INFO [StoreOpener-9d08d66426997727829ead529e62249c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d08d66426997727829ead529e62249c 2023-07-19 18:14:49,651 DEBUG [StoreOpener-9d08d66426997727829ead529e62249c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/f 2023-07-19 18:14:49,651 DEBUG [StoreOpener-9d08d66426997727829ead529e62249c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/f 2023-07-19 18:14:49,651 INFO [StoreOpener-9d08d66426997727829ead529e62249c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d08d66426997727829ead529e62249c columnFamilyName f 2023-07-19 18:14:49,652 INFO [StoreOpener-9d08d66426997727829ead529e62249c-1] regionserver.HStore(310): Store=9d08d66426997727829ead529e62249c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:49,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c 2023-07-19 18:14:49,654 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c 2023-07-19 18:14:49,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d08d66426997727829ead529e62249c 2023-07-19 18:14:49,659 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d08d66426997727829ead529e62249c; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10248867680, jitterRate=-0.04549981653690338}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:49,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d08d66426997727829ead529e62249c: 2023-07-19 18:14:49,660 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c., pid=89, masterSystemTime=1689790489641 2023-07-19 18:14:49,662 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:49,662 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:49,663 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:49,663 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790489663"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790489663"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790489663"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790489663"}]},"ts":"1689790489663"} 2023-07-19 18:14:49,671 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-19 18:14:49,671 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,43775,1689790473982 in 176 msec 2023-07-19 18:14:49,674 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, REOPEN/MOVE in 509 msec 2023-07-19 18:14:50,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-19 18:14:50,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-19 18:14:50,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:50,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-19 18:14:50,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:50,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 296 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:51588 deadline: 1689791690170, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
2023-07-19 18:14:50,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:38251] to rsgroup default 2023-07-19 18:14:50,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:50,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 18:14:50,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:50,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:50,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-19 18:14:50,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799, jenkins-hbase4.apache.org,38419,1689790478179, jenkins-hbase4.apache.org,40615,1689790473552] are moved back to bar 2023-07-19 18:14:50,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-19 18:14:50,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:50,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-19 18:14:50,186 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40615] ipc.CallRunner(144): callId: 212 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:32980 deadline: 1689790550186, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43775 startCode=1689790473982. As of locationSeqNum=6. 
2023-07-19 18:14:50,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:50,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:50,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 18:14:50,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:50,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,310 INFO [Listener at localhost/46039] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-19 18:14:50,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-19 18:14:50,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-19 18:14:50,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-19 18:14:50,318 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790490318"}]},"ts":"1689790490318"} 2023-07-19 18:14:50,321 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-19 18:14:50,323 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-19 18:14:50,323 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, UNASSIGN}] 2023-07-19 18:14:50,326 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, UNASSIGN 2023-07-19 18:14:50,330 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:50,330 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790490330"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790490330"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790490330"}]},"ts":"1689790490330"} 2023-07-19 18:14:50,333 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:50,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-19 18:14:50,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9d08d66426997727829ead529e62249c 2023-07-19 18:14:50,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d08d66426997727829ead529e62249c, disabling compactions & flushes 2023-07-19 18:14:50,488 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:50,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:50,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. after waiting 0 ms 2023-07-19 18:14:50,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 2023-07-19 18:14:50,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-19 18:14:50,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c. 
2023-07-19 18:14:50,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d08d66426997727829ead529e62249c: 2023-07-19 18:14:50,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9d08d66426997727829ead529e62249c 2023-07-19 18:14:50,498 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=9d08d66426997727829ead529e62249c, regionState=CLOSED 2023-07-19 18:14:50,498 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689790490498"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790490498"}]},"ts":"1689790490498"} 2023-07-19 18:14:50,505 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-19 18:14:50,505 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure 9d08d66426997727829ead529e62249c, server=jenkins-hbase4.apache.org,43775,1689790473982 in 167 msec 2023-07-19 18:14:50,507 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-19 18:14:50,507 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d08d66426997727829ead529e62249c, UNASSIGN in 182 msec 2023-07-19 18:14:50,508 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790490508"}]},"ts":"1689790490508"} 2023-07-19 18:14:50,512 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-19 18:14:50,514 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-19 18:14:50,518 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 205 msec 2023-07-19 18:14:50,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-19 18:14:50,618 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-19 18:14:50,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-19 18:14:50,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 18:14:50,621 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 18:14:50,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-19 18:14:50,623 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 18:14:50,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:50,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:50,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:50,629 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c 2023-07-19 18:14:50,631 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/recovered.edits] 2023-07-19 18:14:50,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-19 18:14:50,639 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/recovered.edits/10.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c/recovered.edits/10.seqid 2023-07-19 18:14:50,640 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testFailRemoveGroup/9d08d66426997727829ead529e62249c 2023-07-19 18:14:50,640 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-19 18:14:50,643 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 18:14:50,652 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-19 18:14:50,656 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-19 18:14:50,658 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 18:14:50,658 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-19 18:14:50,658 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790490658"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:50,660 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 18:14:50,660 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9d08d66426997727829ead529e62249c, NAME => 'Group_testFailRemoveGroup,,1689790487410.9d08d66426997727829ead529e62249c.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 18:14:50,660 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-19 18:14:50,661 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689790490661"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:50,663 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-19 18:14:50,666 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 18:14:50,668 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 47 msec 2023-07-19 18:14:50,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-19 18:14:50,733 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-19 18:14:50,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:50,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 18:14:50,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:50,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:50,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:50,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:50,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:50,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:50,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:50,753 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:50,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:50,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:50,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:50,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:50,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:50,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:50,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:50,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 344 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791690767, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:50,767 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:14:50,769 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:50,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,770 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:50,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:50,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:50,790 INFO [Listener at localhost/46039] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=513 (was 498) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-741684861_17 at /127.0.0.1:51374 [Receiving block BP-1139031693-172.31.14.131-1689790468337:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1139031693-172.31.14.131-1689790468337:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-741684861_17 at /127.0.0.1:51404 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2006395679_17 at /127.0.0.1:43334 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-741684861_17 at /127.0.0.1:49038 [Receiving block BP-1139031693-172.31.14.131-1689790468337:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1139031693-172.31.14.131-1689790468337:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1139031693-172.31.14.131-1689790468337:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475-prefix:jenkins-hbase4.apache.org,43775,1689790473982.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-741684861_17 at /127.0.0.1:43318 [Receiving block BP-1139031693-172.31.14.131-1689790468337:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2006395679_17 at /127.0.0.1:49082 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xbd8ecb1-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=798 (was 786) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=503 (was 538), ProcessCount=173 (was 173), AvailableMemoryMB=2904 (was 3072) 2023-07-19 18:14:50,792 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-19 18:14:50,813 INFO [Listener at localhost/46039] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=513, OpenFileDescriptor=798, MaxFileDescriptor=60000, SystemLoadAverage=503, ProcessCount=173, AvailableMemoryMB=2903 2023-07-19 18:14:50,813 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-19 18:14:50,813 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-19 18:14:50,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:50,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 18:14:50,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:50,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:50,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:50,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:50,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:50,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:50,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:50,834 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:50,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:50,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:50,838 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:50,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:50,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:50,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:50,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:50,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 372 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791690851, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:50,852 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 18:14:50,858 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:50,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,860 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:50,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:50,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:50,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:50,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:50,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1985679662 2023-07-19 18:14:50,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1985679662 2023-07-19 18:14:50,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:50,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:50,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:50,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:50,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,886 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38251] to rsgroup Group_testMultiTableMove_1985679662 2023-07-19 18:14:50,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1985679662 2023-07-19 18:14:50,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:50,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:50,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:50,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 18:14:50,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799] are moved back to default 2023-07-19 18:14:50,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1985679662 2023-07-19 18:14:50,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:50,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:50,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:50,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1985679662 2023-07-19 18:14:50,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:50,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:14:50,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 18:14:50,907 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:14:50,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-19 18:14:50,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-19 18:14:50,909 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1985679662 2023-07-19 18:14:50,910 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:50,911 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:50,911 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:50,918 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:14:50,920 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:50,921 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950 empty. 2023-07-19 18:14:50,921 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:50,921 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-19 18:14:50,937 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:50,938 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => b1e7daf750dc9f4436ca9d29117b3950, NAME => 'GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:50,951 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:50,952 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
b1e7daf750dc9f4436ca9d29117b3950, disabling compactions & flushes 2023-07-19 18:14:50,952 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:50,952 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:50,952 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. after waiting 0 ms 2023-07-19 18:14:50,952 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:50,952 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:50,952 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for b1e7daf750dc9f4436ca9d29117b3950: 2023-07-19 18:14:50,954 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:14:50,955 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790490955"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790490955"}]},"ts":"1689790490955"} 2023-07-19 18:14:50,957 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-19 18:14:50,961 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:14:50,962 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790490962"}]},"ts":"1689790490962"} 2023-07-19 18:14:50,963 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-19 18:14:50,968 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:50,968 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:50,968 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:50,968 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:50,968 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:50,968 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, ASSIGN}] 2023-07-19 18:14:50,970 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, ASSIGN 2023-07-19 18:14:50,971 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:51,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-19 18:14:51,121 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 18:14:51,127 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=b1e7daf750dc9f4436ca9d29117b3950, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:51,127 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790491127"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790491127"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790491127"}]},"ts":"1689790491127"} 2023-07-19 18:14:51,130 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure b1e7daf750dc9f4436ca9d29117b3950, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:51,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-19 18:14:51,287 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:51,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b1e7daf750dc9f4436ca9d29117b3950, NAME => 'GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:51,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:51,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:51,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:51,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:51,289 INFO [StoreOpener-b1e7daf750dc9f4436ca9d29117b3950-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:51,290 DEBUG [StoreOpener-b1e7daf750dc9f4436ca9d29117b3950-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/f 2023-07-19 18:14:51,291 DEBUG [StoreOpener-b1e7daf750dc9f4436ca9d29117b3950-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/f 2023-07-19 18:14:51,291 INFO [StoreOpener-b1e7daf750dc9f4436ca9d29117b3950-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b1e7daf750dc9f4436ca9d29117b3950 columnFamilyName f 2023-07-19 18:14:51,292 INFO [StoreOpener-b1e7daf750dc9f4436ca9d29117b3950-1] regionserver.HStore(310): Store=b1e7daf750dc9f4436ca9d29117b3950/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:51,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:51,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:51,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:51,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:51,299 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b1e7daf750dc9f4436ca9d29117b3950; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11980279200, jitterRate=0.11575044691562653}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:51,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b1e7daf750dc9f4436ca9d29117b3950: 2023-07-19 18:14:51,300 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950., pid=96, masterSystemTime=1689790491283 2023-07-19 18:14:51,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:51,301 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 
2023-07-19 18:14:51,302 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=b1e7daf750dc9f4436ca9d29117b3950, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:51,302 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790491302"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790491302"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790491302"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790491302"}]},"ts":"1689790491302"} 2023-07-19 18:14:51,306 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-19 18:14:51,306 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure b1e7daf750dc9f4436ca9d29117b3950, server=jenkins-hbase4.apache.org,43775,1689790473982 in 174 msec 2023-07-19 18:14:51,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-19 18:14:51,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, ASSIGN in 338 msec 2023-07-19 18:14:51,308 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:14:51,308 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790491308"}]},"ts":"1689790491308"} 2023-07-19 18:14:51,312 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-19 18:14:51,314 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:14:51,315 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 411 msec 2023-07-19 18:14:51,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-19 18:14:51,512 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-19 18:14:51,512 DEBUG [Listener at localhost/46039] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-19 18:14:51,512 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:51,517 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-19 18:14:51,517 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:51,517 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-19 18:14:51,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:14:51,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 18:14:51,522 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:14:51,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-19 18:14:51,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 18:14:51,525 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1985679662 2023-07-19 18:14:51,526 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:51,527 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:51,527 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:51,530 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:14:51,532 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:51,533 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb empty. 
2023-07-19 18:14:51,533 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:51,534 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-19 18:14:51,553 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:51,554 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9716eb9455e26619fa0563b6ea7cedcb, NAME => 'GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:51,569 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:51,569 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 9716eb9455e26619fa0563b6ea7cedcb, disabling compactions & flushes 2023-07-19 18:14:51,569 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:51,570 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:51,570 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. after waiting 0 ms 2023-07-19 18:14:51,570 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:51,570 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 
2023-07-19 18:14:51,570 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 9716eb9455e26619fa0563b6ea7cedcb: 2023-07-19 18:14:51,572 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:14:51,573 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790491573"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790491573"}]},"ts":"1689790491573"} 2023-07-19 18:14:51,575 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 18:14:51,576 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:14:51,576 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790491576"}]},"ts":"1689790491576"} 2023-07-19 18:14:51,578 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-19 18:14:51,586 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:51,586 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:51,586 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:51,586 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:51,587 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:51,587 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, ASSIGN}] 2023-07-19 18:14:51,589 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, ASSIGN 2023-07-19 18:14:51,590 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:51,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 18:14:51,740 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 18:14:51,742 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=9716eb9455e26619fa0563b6ea7cedcb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:51,742 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790491742"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790491742"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790491742"}]},"ts":"1689790491742"} 2023-07-19 18:14:51,744 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 9716eb9455e26619fa0563b6ea7cedcb, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:51,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 18:14:51,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:51,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9716eb9455e26619fa0563b6ea7cedcb, NAME => 'GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:51,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:51,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:51,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:51,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:51,916 INFO [StoreOpener-9716eb9455e26619fa0563b6ea7cedcb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:51,918 DEBUG [StoreOpener-9716eb9455e26619fa0563b6ea7cedcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/f 2023-07-19 18:14:51,918 DEBUG [StoreOpener-9716eb9455e26619fa0563b6ea7cedcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/f 2023-07-19 18:14:51,919 INFO [StoreOpener-9716eb9455e26619fa0563b6ea7cedcb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9716eb9455e26619fa0563b6ea7cedcb columnFamilyName f 2023-07-19 18:14:51,920 INFO [StoreOpener-9716eb9455e26619fa0563b6ea7cedcb-1] regionserver.HStore(310): Store=9716eb9455e26619fa0563b6ea7cedcb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:51,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:51,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:51,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:51,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:51,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9716eb9455e26619fa0563b6ea7cedcb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10507347200, jitterRate=-0.021427035331726074}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:51,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9716eb9455e26619fa0563b6ea7cedcb: 2023-07-19 18:14:51,943 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb., pid=99, masterSystemTime=1689790491897 2023-07-19 18:14:51,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:51,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 
2023-07-19 18:14:51,946 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=9716eb9455e26619fa0563b6ea7cedcb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:51,947 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790491946"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790491946"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790491946"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790491946"}]},"ts":"1689790491946"} 2023-07-19 18:14:51,953 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-19 18:14:51,953 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 9716eb9455e26619fa0563b6ea7cedcb, server=jenkins-hbase4.apache.org,43775,1689790473982 in 204 msec 2023-07-19 18:14:51,956 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-19 18:14:51,956 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, ASSIGN in 366 msec 2023-07-19 18:14:51,957 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:14:51,957 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790491957"}]},"ts":"1689790491957"} 2023-07-19 18:14:51,960 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-19 18:14:51,963 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:14:51,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 444 msec 2023-07-19 18:14:52,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 18:14:52,128 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-19 18:14:52,128 DEBUG [Listener at localhost/46039] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-19 18:14:52,128 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:52,133 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
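The CreateTableProcedure for GrouptestMultiTableMoveB (pid=97) finishes just above, and the harness then waits for region assignment. For reference, a minimal sketch of the client-side calls that would produce this pattern, assuming a started HBaseTestingUtility instance named TEST_UTIL (the harness behind the "Listener at localhost/46039" thread); the helper name is hypothetical:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateAndWaitSketch {
  // Hypothetical helper: create the table and block until every region is assigned.
  static void createAndWait(HBaseTestingUtility TEST_UTIL) throws Exception {
    TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");
    // Drives a CreateTableProcedure on the master (pid=97 above) and blocks until it completes.
    TEST_UTIL.createTable(tableB, Bytes.toBytes("f"));
    // Matches "Waiting until all regions of table GrouptestMultiTableMoveB get assigned" in the log.
    TEST_UTIL.waitUntilAllRegionsAssigned(tableB);
  }
}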
2023-07-19 18:14:52,133 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:52,133 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-19 18:14:52,134 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:52,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-19 18:14:52,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:52,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-19 18:14:52,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:52,150 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1985679662 2023-07-19 18:14:52,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1985679662 2023-07-19 18:14:52,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1985679662 2023-07-19 18:14:52,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:52,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:52,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:52,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1985679662 2023-07-19 18:14:52,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region 9716eb9455e26619fa0563b6ea7cedcb to RSGroup Group_testMultiTableMove_1985679662 2023-07-19 18:14:52,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, REOPEN/MOVE 2023-07-19 18:14:52,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1985679662 2023-07-19 18:14:52,369 INFO [PEWorker-1] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, REOPEN/MOVE 2023-07-19 18:14:52,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region b1e7daf750dc9f4436ca9d29117b3950 to RSGroup Group_testMultiTableMove_1985679662 2023-07-19 18:14:52,370 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=9716eb9455e26619fa0563b6ea7cedcb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:52,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, REOPEN/MOVE 2023-07-19 18:14:52,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1985679662, current retry=0 2023-07-19 18:14:52,371 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, REOPEN/MOVE 2023-07-19 18:14:52,371 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790492370"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790492370"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790492370"}]},"ts":"1689790492370"} 2023-07-19 18:14:52,372 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=b1e7daf750dc9f4436ca9d29117b3950, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:52,372 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790492372"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790492372"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790492372"}]},"ts":"1689790492372"} 2023-07-19 18:14:52,373 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 9716eb9455e26619fa0563b6ea7cedcb, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:52,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure b1e7daf750dc9f4436ca9d29117b3950, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:52,528 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:52,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9716eb9455e26619fa0563b6ea7cedcb, disabling compactions & flushes 2023-07-19 18:14:52,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 
2023-07-19 18:14:52,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:52,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. after waiting 0 ms 2023-07-19 18:14:52,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:52,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:52,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:52,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9716eb9455e26619fa0563b6ea7cedcb: 2023-07-19 18:14:52,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9716eb9455e26619fa0563b6ea7cedcb move to jenkins-hbase4.apache.org,38251,1689790473799 record at close sequenceid=2 2023-07-19 18:14:52,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:52,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:52,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b1e7daf750dc9f4436ca9d29117b3950, disabling compactions & flushes 2023-07-19 18:14:52,545 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:52,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:52,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. after waiting 0 ms 2023-07-19 18:14:52,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 
2023-07-19 18:14:52,545 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=9716eb9455e26619fa0563b6ea7cedcb, regionState=CLOSED 2023-07-19 18:14:52,545 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790492545"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790492545"}]},"ts":"1689790492545"} 2023-07-19 18:14:52,549 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-19 18:14:52,550 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 9716eb9455e26619fa0563b6ea7cedcb, server=jenkins-hbase4.apache.org,43775,1689790473982 in 175 msec 2023-07-19 18:14:52,551 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38251,1689790473799; forceNewPlan=false, retain=false 2023-07-19 18:14:52,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:52,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 
2023-07-19 18:14:52,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b1e7daf750dc9f4436ca9d29117b3950: 2023-07-19 18:14:52,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b1e7daf750dc9f4436ca9d29117b3950 move to jenkins-hbase4.apache.org,38251,1689790473799 record at close sequenceid=2 2023-07-19 18:14:52,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:52,564 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=b1e7daf750dc9f4436ca9d29117b3950, regionState=CLOSED 2023-07-19 18:14:52,564 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790492564"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790492564"}]},"ts":"1689790492564"} 2023-07-19 18:14:52,571 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-19 18:14:52,571 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure b1e7daf750dc9f4436ca9d29117b3950, server=jenkins-hbase4.apache.org,43775,1689790473982 in 192 msec 2023-07-19 18:14:52,572 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38251,1689790473799; forceNewPlan=false, retain=false 2023-07-19 18:14:52,701 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=9716eb9455e26619fa0563b6ea7cedcb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:52,701 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=b1e7daf750dc9f4436ca9d29117b3950, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:52,701 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790492701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790492701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790492701"}]},"ts":"1689790492701"} 2023-07-19 18:14:52,702 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790492701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790492701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790492701"}]},"ts":"1689790492701"} 2023-07-19 18:14:52,703 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure 9716eb9455e26619fa0563b6ea7cedcb, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:52,704 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, 
state=RUNNABLE; OpenRegionProcedure b1e7daf750dc9f4436ca9d29117b3950, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:52,861 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:52,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b1e7daf750dc9f4436ca9d29117b3950, NAME => 'GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:52,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:52,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:52,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:52,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:52,869 INFO [StoreOpener-b1e7daf750dc9f4436ca9d29117b3950-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:52,870 DEBUG [StoreOpener-b1e7daf750dc9f4436ca9d29117b3950-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/f 2023-07-19 18:14:52,870 DEBUG [StoreOpener-b1e7daf750dc9f4436ca9d29117b3950-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/f 2023-07-19 18:14:52,871 INFO [StoreOpener-b1e7daf750dc9f4436ca9d29117b3950-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b1e7daf750dc9f4436ca9d29117b3950 columnFamilyName f 2023-07-19 18:14:52,872 INFO [StoreOpener-b1e7daf750dc9f4436ca9d29117b3950-1] regionserver.HStore(310): Store=b1e7daf750dc9f4436ca9d29117b3950/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:52,874 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:52,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:52,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:52,882 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b1e7daf750dc9f4436ca9d29117b3950; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10902557920, jitterRate=0.015379831194877625}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:52,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b1e7daf750dc9f4436ca9d29117b3950: 2023-07-19 18:14:52,883 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950., pid=105, masterSystemTime=1689790492856 2023-07-19 18:14:52,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:52,886 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=b1e7daf750dc9f4436ca9d29117b3950, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:52,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:52,886 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790492886"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790492886"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790492886"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790492886"}]},"ts":"1689790492886"} 2023-07-19 18:14:52,887 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 
2023-07-19 18:14:52,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9716eb9455e26619fa0563b6ea7cedcb, NAME => 'GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:52,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:52,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:52,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:52,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:52,894 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-19 18:14:52,894 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure b1e7daf750dc9f4436ca9d29117b3950, server=jenkins-hbase4.apache.org,38251,1689790473799 in 187 msec 2023-07-19 18:14:52,895 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, REOPEN/MOVE in 525 msec 2023-07-19 18:14:52,899 INFO [StoreOpener-9716eb9455e26619fa0563b6ea7cedcb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:52,900 DEBUG [StoreOpener-9716eb9455e26619fa0563b6ea7cedcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/f 2023-07-19 18:14:52,900 DEBUG [StoreOpener-9716eb9455e26619fa0563b6ea7cedcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/f 2023-07-19 18:14:52,901 INFO [StoreOpener-9716eb9455e26619fa0563b6ea7cedcb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9716eb9455e26619fa0563b6ea7cedcb columnFamilyName f 2023-07-19 18:14:52,901 INFO 
[StoreOpener-9716eb9455e26619fa0563b6ea7cedcb-1] regionserver.HStore(310): Store=9716eb9455e26619fa0563b6ea7cedcb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:52,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:52,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:52,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:52,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9716eb9455e26619fa0563b6ea7cedcb; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11196754560, jitterRate=0.04277902841567993}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:52,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9716eb9455e26619fa0563b6ea7cedcb: 2023-07-19 18:14:52,909 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb., pid=104, masterSystemTime=1689790492856 2023-07-19 18:14:52,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:52,911 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 
2023-07-19 18:14:52,912 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=9716eb9455e26619fa0563b6ea7cedcb, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:52,912 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790492912"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790492912"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790492912"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790492912"}]},"ts":"1689790492912"} 2023-07-19 18:14:52,916 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-19 18:14:52,916 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure 9716eb9455e26619fa0563b6ea7cedcb, server=jenkins-hbase4.apache.org,38251,1689790473799 in 211 msec 2023-07-19 18:14:52,917 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, REOPEN/MOVE in 550 msec 2023-07-19 18:14:53,309 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 18:14:53,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-19 18:14:53,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1985679662. 
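At this point RSGroupAdminServer reports that all regions of both tables have been moved to Group_testMultiTableMove_1985679662, after the REOPEN/MOVE procedures pid=100 and pid=101 reopened them on server 38251. A minimal sketch of how such a move can be issued through the rsgroup coprocessor endpoint, assuming an open Connection named conn and that the target group already exists (it was created earlier in the test); the class and method names are those of the hbase-rsgroup module exercised by this log, and the helper itself is hypothetical:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTablesSketch {
  // Hypothetical helper: move both test tables into the target rsgroup and confirm the mapping.
  static void moveTables(Connection conn, String targetGroup) throws Exception {
    RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
    Set<TableName> tables = new HashSet<>();
    tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
    tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
    // Issues RSGroupAdminService.MoveTables; the master then reassigns every region of both
    // tables onto the servers of targetGroup (the REOPEN/MOVE procedures seen above).
    rsGroupAdmin.moveTables(tables, targetGroup);
    // Issues RSGroupAdminService.GetRSGroupInfoOfTable, as logged right after the move completes.
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveB"));
    if (!targetGroup.equals(info.getName())) {
      throw new IllegalStateException("table not in expected rsgroup: " + info.getName());
    }
  }
}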
2023-07-19 18:14:53,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:53,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:53,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:53,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-19 18:14:53,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:53,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-19 18:14:53,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:53,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:53,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:53,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1985679662 2023-07-19 18:14:53,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:53,387 INFO [Listener at localhost/46039] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-19 18:14:53,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-19 18:14:53,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 18:14:53,393 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790493393"}]},"ts":"1689790493393"} 2023-07-19 18:14:53,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done 
pid=106 2023-07-19 18:14:53,395 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-19 18:14:53,397 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-19 18:14:53,401 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, UNASSIGN}] 2023-07-19 18:14:53,403 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, UNASSIGN 2023-07-19 18:14:53,404 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=b1e7daf750dc9f4436ca9d29117b3950, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:53,404 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790493404"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790493404"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790493404"}]},"ts":"1689790493404"} 2023-07-19 18:14:53,406 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure b1e7daf750dc9f4436ca9d29117b3950, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:53,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-19 18:14:53,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:53,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b1e7daf750dc9f4436ca9d29117b3950, disabling compactions & flushes 2023-07-19 18:14:53,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:53,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:53,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. after waiting 0 ms 2023-07-19 18:14:53,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 
2023-07-19 18:14:53,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 18:14:53,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950. 2023-07-19 18:14:53,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b1e7daf750dc9f4436ca9d29117b3950: 2023-07-19 18:14:53,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:53,569 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=b1e7daf750dc9f4436ca9d29117b3950, regionState=CLOSED 2023-07-19 18:14:53,569 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790493569"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790493569"}]},"ts":"1689790493569"} 2023-07-19 18:14:53,572 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-19 18:14:53,572 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure b1e7daf750dc9f4436ca9d29117b3950, server=jenkins-hbase4.apache.org,38251,1689790473799 in 164 msec 2023-07-19 18:14:53,573 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-19 18:14:53,573 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=b1e7daf750dc9f4436ca9d29117b3950, UNASSIGN in 174 msec 2023-07-19 18:14:53,574 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790493574"}]},"ts":"1689790493574"} 2023-07-19 18:14:53,575 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-19 18:14:53,583 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-19 18:14:53,585 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 196 msec 2023-07-19 18:14:53,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-19 18:14:53,696 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-19 18:14:53,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-19 18:14:53,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=GrouptestMultiTableMoveA 2023-07-19 18:14:53,699 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 18:14:53,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1985679662' 2023-07-19 18:14:53,700 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 18:14:53,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1985679662 2023-07-19 18:14:53,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:53,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:53,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:53,704 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:53,706 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/recovered.edits] 2023-07-19 18:14:53,711 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/recovered.edits/7.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950/recovered.edits/7.seqid 2023-07-19 18:14:53,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-19 18:14:53,711 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveA/b1e7daf750dc9f4436ca9d29117b3950 2023-07-19 18:14:53,711 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-19 18:14:53,714 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 18:14:53,716 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-19 18:14:53,718 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): 
Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-19 18:14:53,719 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 18:14:53,719 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-19 18:14:53,719 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790493719"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:53,720 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 18:14:53,721 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b1e7daf750dc9f4436ca9d29117b3950, NAME => 'GrouptestMultiTableMoveA,,1689790490903.b1e7daf750dc9f4436ca9d29117b3950.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 18:14:53,721 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-19 18:14:53,721 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689790493721"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:53,722 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-19 18:14:53,730 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 18:14:53,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 33 msec 2023-07-19 18:14:53,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-19 18:14:53,813 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-19 18:14:53,813 INFO [Listener at localhost/46039] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-19 18:14:53,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-19 18:14:53,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 18:14:53,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-19 18:14:53,818 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790493818"}]},"ts":"1689790493818"} 2023-07-19 18:14:53,819 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-19 18:14:53,821 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-19 18:14:53,822 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, UNASSIGN}] 2023-07-19 18:14:53,824 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, UNASSIGN 2023-07-19 18:14:53,824 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=9716eb9455e26619fa0563b6ea7cedcb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:14:53,824 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790493824"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790493824"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790493824"}]},"ts":"1689790493824"} 2023-07-19 18:14:53,826 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 9716eb9455e26619fa0563b6ea7cedcb, server=jenkins-hbase4.apache.org,38251,1689790473799}] 2023-07-19 18:14:53,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-19 18:14:53,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:53,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9716eb9455e26619fa0563b6ea7cedcb, disabling compactions & flushes 2023-07-19 18:14:53,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:53,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:53,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. after waiting 0 ms 2023-07-19 18:14:53,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 2023-07-19 18:14:53,987 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 18:14:53,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb. 
2023-07-19 18:14:53,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9716eb9455e26619fa0563b6ea7cedcb: 2023-07-19 18:14:53,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:53,992 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=9716eb9455e26619fa0563b6ea7cedcb, regionState=CLOSED 2023-07-19 18:14:53,992 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689790493992"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790493992"}]},"ts":"1689790493992"} 2023-07-19 18:14:53,996 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-19 18:14:53,996 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 9716eb9455e26619fa0563b6ea7cedcb, server=jenkins-hbase4.apache.org,38251,1689790473799 in 168 msec 2023-07-19 18:14:53,999 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-19 18:14:53,999 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9716eb9455e26619fa0563b6ea7cedcb, UNASSIGN in 174 msec 2023-07-19 18:14:54,000 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790494000"}]},"ts":"1689790494000"} 2023-07-19 18:14:54,002 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-19 18:14:54,004 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-19 18:14:54,006 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 191 msec 2023-07-19 18:14:54,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-19 18:14:54,121 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-19 18:14:54,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-19 18:14:54,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 18:14:54,124 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 18:14:54,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1985679662' 2023-07-19 18:14:54,125 DEBUG [PEWorker-5] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 18:14:54,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1985679662 2023-07-19 18:14:54,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,130 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:54,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:54,132 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/recovered.edits] 2023-07-19 18:14:54,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-19 18:14:54,140 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/recovered.edits/7.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb/recovered.edits/7.seqid 2023-07-19 18:14:54,141 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb 2023-07-19 18:14:54,141 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-19 18:14:54,144 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 18:14:54,147 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-19 18:14:54,149 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-19 18:14:54,151 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 18:14:54,151 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
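The HFileArchiver records above move the region's files out of the table's temporary data location and into the cluster archive before any META rows are removed; the destination keeps the same data/<namespace>/<table>/<region> layout, only rooted under archive/ instead of .tmp/. A small path-mapping sketch to make that layout concrete; the helper below is hypothetical, not an HBase API:

    import org.apache.hadoop.fs.Path;

    public class ArchivePathSketch {
      // Hypothetical helper: derive the archive location logged above by swapping the
      // ".tmp/" prefix for "archive/" under the same HBase root directory.
      static Path toArchivePath(Path tmpRegionDir, Path hbaseRootDir) {
        String relative = tmpRegionDir.toString()
            .substring(hbaseRootDir.toString().length() + "/.tmp/".length());
        return new Path(new Path(hbaseRootDir, "archive"), relative);
      }

      public static void main(String[] args) {
        Path root = new Path("hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475");
        Path tmpRegion = new Path(root,
            ".tmp/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb");
        // Prints .../archive/data/default/GrouptestMultiTableMoveB/9716eb9455e26619fa0563b6ea7cedcb
        System.out.println(toArchivePath(tmpRegion, root));
      }
    }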
2023-07-19 18:14:54,151 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790494151"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:54,153 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 18:14:54,153 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9716eb9455e26619fa0563b6ea7cedcb, NAME => 'GrouptestMultiTableMoveB,,1689790491519.9716eb9455e26619fa0563b6ea7cedcb.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 18:14:54,153 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-19 18:14:54,153 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689790494153"}]},"ts":"9223372036854775807"} 2023-07-19 18:14:54,156 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-19 18:14:54,158 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 18:14:54,160 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 37 msec 2023-07-19 18:14:54,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-19 18:14:54,238 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-19 18:14:54,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:54,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
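By this point the region directory has been archived, its row and the table state row are gone from hbase:meta, the descriptor is removed, and the client's DELETE future (procId 113) returns; the rsgroup endpoint has also dropped the deleted table from Group_testMultiTableMove_1985679662. On the client side this whole step is a single deleteTable call against an already-disabled table; a sketch assuming an Admin handle like the one in the earlier fragment:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DeleteTableSketch {
      // Sketch only: DeleteTableProcedure requires the table to be disabled first
      // (procId 110 above), then archives region dirs, deletes the META rows and
      // removes the table descriptor -- the steps logged for pid=113.
      static void dropTable(Admin admin, String name) throws IOException {
        TableName table = TableName.valueOf(name);
        if (!admin.isTableDisabled(table)) {
          admin.disableTable(table);
        }
        admin.deleteTable(table);
      }
    }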
2023-07-19 18:14:54,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:54,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38251] to rsgroup default 2023-07-19 18:14:54,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1985679662 2023-07-19 18:14:54,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:54,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1985679662, current retry=0 2023-07-19 18:14:54,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799] are moved back to Group_testMultiTableMove_1985679662 2023-07-19 18:14:54,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1985679662 => default 2023-07-19 18:14:54,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:54,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1985679662 2023-07-19 18:14:54,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 18:14:54,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:54,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:54,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
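The handler records above are the test's per-method teardown (TestRSGroupsBase.tearDownAfterMethod): tables and servers are moved back to the default group and the temporary group is removed, leaving only default plus the bookkeeping 'master' group. The attempt that follows, moving jenkins-hbase4.apache.org:46739 into the 'master' group, fails with a ConstraintException, apparently because 46739 is the master's own RPC port rather than a registered region server; the test tolerates this and only logs it as "Got this on setup, FYI". A sketch of the teardown calls, assuming the hbase-rsgroup RSGroupAdminClient that these tests use (connection setup and literals are illustrative):

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Move any tables back to 'default'; an empty set is simply ignored by the
          // server, exactly as the "moveTables() passed an empty set" record notes.
          groups.moveTables(Collections.emptySet(), "default");
          // Move the group's only server back to 'default', then drop the group.
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 38251)),
              "default");
          groups.removeRSGroup("Group_testMultiTableMove_1985679662");
        }
      }
    }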
2023-07-19 18:14:54,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:54,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:54,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:54,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:54,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:54,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:54,270 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:54,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:54,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:54,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:54,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:54,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:54,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791694284, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:54,285 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:14:54,286 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:54,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,288 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:54,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:54,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,314 INFO [Listener at localhost/46039] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=511 (was 513), OpenFileDescriptor=788 (was 798), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=463 (was 503), ProcessCount=173 (was 173), AvailableMemoryMB=2639 (was 2903) 2023-07-19 18:14:54,314 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-19 18:14:54,334 INFO [Listener at localhost/46039] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=510, OpenFileDescriptor=788, MaxFileDescriptor=60000, SystemLoadAverage=463, ProcessCount=173, AvailableMemoryMB=2639 2023-07-19 18:14:54,334 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-19 18:14:54,334 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-19 18:14:54,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:54,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 18:14:54,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:54,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:54,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:54,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:54,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:54,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:54,352 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:54,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:54,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:54,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:54,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:54,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:54,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791694364, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:54,365 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 18:14:54,366 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:54,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,368 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:54,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:54,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:54,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-19 18:14:54,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 18:14:54,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:54,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:54,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:38251] to rsgroup oldGroup 2023-07-19 18:14:54,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 18:14:54,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:54,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 18:14:54,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799, jenkins-hbase4.apache.org,38419,1689790478179] are moved back to default 2023-07-19 18:14:54,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-19 18:14:54,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:54,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-19 18:14:54,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-19 18:14:54,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:54,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,400 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-19 18:14:54,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-19 18:14:54,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 18:14:54,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 18:14:54,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:54,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40615] to rsgroup anotherRSGroup 2023-07-19 18:14:54,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-19 18:14:54,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 18:14:54,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 18:14:54,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 18:14:54,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40615,1689790473552] are moved back to default 2023-07-19 18:14:54,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-19 18:14:54,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:54,421 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-19 18:14:54,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-19 18:14:54,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-19 18:14:54,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:54,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:51588 deadline: 1689791694431, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-19 18:14:54,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-19 18:14:54,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:54,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:51588 deadline: 1689791694433, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-19 18:14:54,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-19 18:14:54,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:54,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:51588 deadline: 1689791694435, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-19 18:14:54,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-19 18:14:54,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:54,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:51588 deadline: 1689791694435, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-19 18:14:54,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:54,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
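The three ConstraintExceptions above are the rename constraint checks exercised by testRenameRSGroupConstraints: the source group must exist, the target name must be free, and the built-in default group cannot be renamed. A minimal client-side sketch of triggering the same checks is below; it assumes the branch-2.4 class org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient exposes renameRSGroup(oldName, newName) (the server-side frames above only confirm the endpoint), and the helper expectConstraintException is hypothetical, added here for illustration only.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameConstraintsSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Source group must exist -> "RSGroup nonExistingRSGroup does not exist"
      expectConstraintException(() -> rsGroupAdmin.renameRSGroup("nonExistingRSGroup", "newRSGroup1"));
      // Target name must be free -> "Group already exists: anotherRSGroup"
      expectConstraintException(() -> rsGroupAdmin.renameRSGroup("oldGroup", "anotherRSGroup"));
      // The built-in default group cannot be renamed -> "Can't rename default rsgroup"
      expectConstraintException(() -> rsGroupAdmin.renameRSGroup("default", "newRSGroup2"));
      // Nor can another group take the name "default" -> "Group already exists: default"
      expectConstraintException(() -> rsGroupAdmin.renameRSGroup("oldGroup", "default"));
    }
  }

  @FunctionalInterface
  interface RenameCall {
    void run() throws IOException;
  }

  // Hypothetical helper: each call is expected to fail with a ConstraintException,
  // which the RPC layer unwraps back into the original exception type on the client.
  static void expectConstraintException(RenameCall call) throws IOException {
    try {
      call.run();
      throw new IllegalStateException("rename unexpectedly succeeded");
    } catch (ConstraintException expected) {
      System.out.println("Got expected constraint violation: " + expected.getMessage());
    }
  }
}
```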
2023-07-19 18:14:54,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:54,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40615] to rsgroup default 2023-07-19 18:14:54,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-19 18:14:54,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 18:14:54,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 18:14:54,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-19 18:14:54,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40615,1689790473552] are moved back to anotherRSGroup 2023-07-19 18:14:54,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-19 18:14:54,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:54,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-19 18:14:54,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 18:14:54,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-19 18:14:54,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:54,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:54,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-19 18:14:54,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:54,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:38251] to rsgroup default 2023-07-19 18:14:54,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 18:14:54,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:54,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-19 18:14:54,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799, jenkins-hbase4.apache.org,38419,1689790478179] are moved back to oldGroup 2023-07-19 18:14:54,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-19 18:14:54,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:54,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-19 18:14:54,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 18:14:54,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:54,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:54,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
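The teardown sequence logged above (move servers back, then remove the group, repeated for anotherRSGroup and oldGroup) follows a simple drain-then-remove pattern. A minimal sketch under the branch-2.4 RSGroupAdminClient API is below; the host:port values in the log are real test servers, so the sketch just drains whatever the group currently holds rather than hard-coding them.

```java
import java.io.IOException;
import java.util.Set;
import java.util.TreeSet;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupTeardownSketch {
  // Drain a test-created rsgroup back into "default" and drop it,
  // mirroring "Move servers done: oldGroup => default" followed by
  // "remove rsgroup oldGroup" in the log above.
  static void drainAndRemove(Connection conn, String group) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    if (info != null && !info.getServers().isEmpty()) {
      // Move every region server of the group back to the default group.
      Set<Address> servers = new TreeSet<>(info.getServers());
      rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
    }
    // Once the group is empty it can be removed.
    rsGroupAdmin.removeRSGroup(group);
  }
}
```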
2023-07-19 18:14:54,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:54,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:54,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:54,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:54,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:54,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:54,485 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:54,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:54,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:54,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:54,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:54,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:54,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791694496, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:54,497 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:14:54,498 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:54,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,499 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:54,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:54,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,519 INFO [Listener at localhost/46039] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=514 (was 510) Potentially hanging thread: hconnection-0x22934466-shared-pool-21 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=788 (was 788), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=463 (was 463), ProcessCount=173 (was 173), AvailableMemoryMB=2642 (was 2639) - AvailableMemoryMB LEAK? - 2023-07-19 18:14:54,522 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-19 18:14:54,539 INFO [Listener at localhost/46039] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=513, OpenFileDescriptor=788, MaxFileDescriptor=60000, SystemLoadAverage=463, ProcessCount=173, AvailableMemoryMB=2642 2023-07-19 18:14:54,539 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-19 18:14:54,539 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-19 18:14:54,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:54,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
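The "Got this on setup, FYI" warning that recurs around each test boundary comes from the base class trying to move the master's own host:port into an rsgroup named "master"; that address is not a live region server, so the master rejects it with the ConstraintException seen above and the test simply logs it and continues. A minimal sketch of that step, assuming the branch-2.4 RSGroupAdminClient API (the address below is the one from this log and is illustrative only):

```java
import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterServerSketch {
  static void tryMoveMasterAddress(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("master");
    Address masterAddress = Address.fromString("jenkins-hbase4.apache.org:46739");
    try {
      rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException e) {
      // Expected: "Server jenkins-hbase4.apache.org:46739 is either offline
      // or it does not exist." -- logged as a warning, not a test failure.
      System.out.println("Got this on setup, FYI: " + e.getMessage());
    }
  }
}
```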
2023-07-19 18:14:54,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:54,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:54,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:54,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:54,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:54,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:54,556 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:54,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:54,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:54,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:54,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:14:54,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:14:54,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791694569, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:14:54,570 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:14:54,572 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:54,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,573 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:14:54,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:54,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:54,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-19 18:14:54,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 18:14:54,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:54,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:54,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:38251] to rsgroup oldgroup 2023-07-19 18:14:54,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 18:14:54,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:54,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 18:14:54,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799, jenkins-hbase4.apache.org,38419,1689790478179] are moved back to default 2023-07-19 18:14:54,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-19 18:14:54,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:54,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:54,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:54,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-19 18:14:54,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:54,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:14:54,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-19 18:14:54,606 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:14:54,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-19 18:14:54,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-19 18:14:54,609 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 18:14:54,609 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:54,609 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:54,610 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:54,614 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:14:54,615 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/testRename/4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:54,616 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/testRename/4804c008141c21a22bec55f72429fc21 empty. 
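The create-table request logged above ('testRename' with a single family 'tr' and default attributes) corresponds to a plain Admin.createTable call; the master turns it into CreateTableProcedure pid=114 and the client polls "Checking to see if procedure is done" until it completes. A minimal sketch with the standard HBase 2.x client API (connection setup here is generic boilerplate, not the test's own code):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestRenameTableSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // One family 'tr' left at the defaults shown in the logged schema
      // (VERSIONS => '1', BLOCKSIZE => '65536', no compression, ...).
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build();
      // Blocks until the master's CreateTableProcedure finishes.
      admin.createTable(desc);
    }
  }
}
```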
2023-07-19 18:14:54,616 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/testRename/4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:54,616 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-19 18:14:54,632 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:54,633 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4804c008141c21a22bec55f72429fc21, NAME => 'testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:54,644 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:54,644 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 4804c008141c21a22bec55f72429fc21, disabling compactions & flushes 2023-07-19 18:14:54,644 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:54,645 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:54,645 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. after waiting 0 ms 2023-07-19 18:14:54,645 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:54,645 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:54,645 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 4804c008141c21a22bec55f72429fc21: 2023-07-19 18:14:54,647 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:14:54,648 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790494648"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790494648"}]},"ts":"1689790494648"} 2023-07-19 18:14:54,650 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-19 18:14:54,651 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:14:54,652 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790494652"}]},"ts":"1689790494652"} 2023-07-19 18:14:54,653 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-19 18:14:54,656 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:54,656 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:54,656 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:54,656 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:54,657 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, ASSIGN}] 2023-07-19 18:14:54,658 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, ASSIGN 2023-07-19 18:14:54,659 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40615,1689790473552; forceNewPlan=false, retain=false 2023-07-19 18:14:54,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-19 18:14:54,809 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 18:14:54,811 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=4804c008141c21a22bec55f72429fc21, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:54,811 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790494811"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790494811"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790494811"}]},"ts":"1689790494811"} 2023-07-19 18:14:54,812 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 4804c008141c21a22bec55f72429fc21, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:54,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-19 18:14:54,995 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:54,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4804c008141c21a22bec55f72429fc21, NAME => 'testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:54,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:54,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:54,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:54,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:54,998 INFO [StoreOpener-4804c008141c21a22bec55f72429fc21-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,000 DEBUG [StoreOpener-4804c008141c21a22bec55f72429fc21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21/tr 2023-07-19 18:14:55,000 DEBUG [StoreOpener-4804c008141c21a22bec55f72429fc21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21/tr 2023-07-19 18:14:55,001 INFO [StoreOpener-4804c008141c21a22bec55f72429fc21-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4804c008141c21a22bec55f72429fc21 columnFamilyName tr 2023-07-19 18:14:55,001 INFO [StoreOpener-4804c008141c21a22bec55f72429fc21-1] regionserver.HStore(310): Store=4804c008141c21a22bec55f72429fc21/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:55,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:55,010 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4804c008141c21a22bec55f72429fc21; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9664955360, jitterRate=-0.0998808890581131}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:55,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4804c008141c21a22bec55f72429fc21: 2023-07-19 18:14:55,011 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689790494603.4804c008141c21a22bec55f72429fc21., pid=116, masterSystemTime=1689790494964 2023-07-19 18:14:55,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:55,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 
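The CompactionConfiguration(173) line printed while the 'tr' store opens is just the effective compaction settings echoed back. If they ever need changing in a test or deployment, they map onto standard hbase-site.xml keys roughly as below; the defaults shown are the values in the log, and the key names are quoted from general HBase knowledge rather than from this test's config:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class CompactionTuning {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Each value below corresponds to a field in the CompactionConfiguration log line.
        System.out.println(conf.getLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024)); // minCompactSize
        System.out.println(conf.getInt("hbase.hstore.compaction.min", 3));                        // minFilesToCompact
        System.out.println(conf.getInt("hbase.hstore.compaction.max", 10));                       // maxFilesToCompact
        System.out.println(conf.getFloat("hbase.hstore.compaction.ratio", 1.2f));                 // ratio
        System.out.println(conf.getFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f));         // off-peak ratio
        System.out.println(conf.getLong("hbase.hregion.majorcompaction", 604800000L));            // major period (ms)
      }
    }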
2023-07-19 18:14:55,013 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=4804c008141c21a22bec55f72429fc21, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:55,014 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790495013"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790495013"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790495013"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790495013"}]},"ts":"1689790495013"} 2023-07-19 18:14:55,017 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-19 18:14:55,017 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 4804c008141c21a22bec55f72429fc21, server=jenkins-hbase4.apache.org,40615,1689790473552 in 204 msec 2023-07-19 18:14:55,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-19 18:14:55,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, ASSIGN in 360 msec 2023-07-19 18:14:55,019 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:14:55,019 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790495019"}]},"ts":"1689790495019"} 2023-07-19 18:14:55,020 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-19 18:14:55,022 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:14:55,024 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 419 msec 2023-07-19 18:14:55,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-19 18:14:55,211 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-19 18:14:55,211 DEBUG [Listener at localhost/46039] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-19 18:14:55,211 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:55,215 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-19 18:14:55,215 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:55,215 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
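At this point pid=114 has finished, the client's TableFuture reports CREATE completed, and the listener blocks until every region of testRename is assigned. Roughly how a test drives that against a mini-cluster, sketched with HBaseTestingUtility (whose waitUntilAllRegionsAssigned produces the 3430/3504 messages above); the real test's fixture and cluster options will differ:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public final class CreateAndWait {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);                    // three region servers, as in this run
        TableName tn = TableName.valueOf("testRename");
        util.createTable(tn, "tr");                  // triggers a CreateTableProcedure like pid=114
        util.waitUntilAllRegionsAssigned(tn);        // "Waiting until all regions ... get assigned"
        util.shutdownMiniCluster();
      }
    }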
2023-07-19 18:14:55,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-19 18:14:55,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 18:14:55,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:55,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:55,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:14:55,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-19 18:14:55,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region 4804c008141c21a22bec55f72429fc21 to RSGroup oldgroup 2023-07-19 18:14:55,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:14:55,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:14:55,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:14:55,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:14:55,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:14:55,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, REOPEN/MOVE 2023-07-19 18:14:55,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-19 18:14:55,225 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, REOPEN/MOVE 2023-07-19 18:14:55,226 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4804c008141c21a22bec55f72429fc21, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:55,226 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790495226"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790495226"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790495226"}]},"ts":"1689790495226"} 2023-07-19 18:14:55,227 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure 4804c008141c21a22bec55f72429fc21, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:55,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4804c008141c21a22bec55f72429fc21, disabling compactions & flushes 2023-07-19 18:14:55,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:55,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:55,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. after waiting 0 ms 2023-07-19 18:14:55,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:55,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:55,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:55,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4804c008141c21a22bec55f72429fc21: 2023-07-19 18:14:55,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4804c008141c21a22bec55f72429fc21 move to jenkins-hbase4.apache.org,38419,1689790478179 record at close sequenceid=2 2023-07-19 18:14:55,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,388 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4804c008141c21a22bec55f72429fc21, regionState=CLOSED 2023-07-19 18:14:55,389 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790495388"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790495388"}]},"ts":"1689790495388"} 2023-07-19 18:14:55,391 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-19 18:14:55,391 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 4804c008141c21a22bec55f72429fc21, server=jenkins-hbase4.apache.org,40615,1689790473552 in 163 msec 2023-07-19 18:14:55,392 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38419,1689790478179; 
forceNewPlan=false, retain=false 2023-07-19 18:14:55,542 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 18:14:55,543 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4804c008141c21a22bec55f72429fc21, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:55,543 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790495542"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790495542"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790495542"}]},"ts":"1689790495542"} 2023-07-19 18:14:55,545 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 4804c008141c21a22bec55f72429fc21, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:55,701 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:55,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4804c008141c21a22bec55f72429fc21, NAME => 'testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:55,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:55,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,703 INFO [StoreOpener-4804c008141c21a22bec55f72429fc21-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,704 DEBUG [StoreOpener-4804c008141c21a22bec55f72429fc21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21/tr 2023-07-19 18:14:55,704 DEBUG [StoreOpener-4804c008141c21a22bec55f72429fc21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21/tr 2023-07-19 18:14:55,704 INFO [StoreOpener-4804c008141c21a22bec55f72429fc21-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4804c008141c21a22bec55f72429fc21 columnFamilyName tr 2023-07-19 18:14:55,705 INFO [StoreOpener-4804c008141c21a22bec55f72429fc21-1] regionserver.HStore(310): Store=4804c008141c21a22bec55f72429fc21/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:55,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,707 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:55,711 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4804c008141c21a22bec55f72429fc21; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11948088640, jitterRate=0.11275246739387512}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:55,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4804c008141c21a22bec55f72429fc21: 2023-07-19 18:14:55,712 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689790494603.4804c008141c21a22bec55f72429fc21., pid=119, masterSystemTime=1689790495696 2023-07-19 18:14:55,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:55,714 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 
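The close on port 40615 and this reopen on port 38419 are the REOPEN/MOVE procedure (pid=117) carrying testRename onto a server in RSGroup oldgroup. The client side of that is the MoveTables call named in the log; a sketch assuming the RSGroupAdminClient helper from the hbase-rsgroup module (the class behind the RSGroupAdminService RPCs here; the test may go through its own wrapper):

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Issues RSGroupAdminService.MoveTables; the master then runs the REOPEN/MOVE
          // TransitRegionStateProcedure seen above and waits for it before returning.
          groups.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
        }
      }
    }

The call only returns once ProcedureSyncWait sees pid=117 finish, which is why the "All regions from table(s) [testRename] moved to target group oldgroup" line appears about a second after the move was requested.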
2023-07-19 18:14:55,714 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4804c008141c21a22bec55f72429fc21, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:55,715 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790495714"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790495714"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790495714"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790495714"}]},"ts":"1689790495714"} 2023-07-19 18:14:55,718 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-19 18:14:55,719 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 4804c008141c21a22bec55f72429fc21, server=jenkins-hbase4.apache.org,38419,1689790478179 in 171 msec 2023-07-19 18:14:55,728 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, REOPEN/MOVE in 495 msec 2023-07-19 18:14:56,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-19 18:14:56,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-19 18:14:56,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:56,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:56,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:56,231 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:56,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-19 18:14:56,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:56,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-19 18:14:56,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:56,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-19 18:14:56,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:56,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:14:56,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:56,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-19 18:14:56,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 18:14:56,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 18:14:56,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:56,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:56,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 18:14:56,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:14:56,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:56,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:56,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40615] to rsgroup normal 2023-07-19 18:14:56,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 18:14:56,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 18:14:56,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:56,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:56,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 18:14:56,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 18:14:56,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40615,1689790473552] are moved back to default 2023-07-19 18:14:56,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-19 18:14:56,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:56,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:56,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:56,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-19 18:14:56,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:56,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:14:56,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-19 18:14:56,263 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:14:56,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-19 18:14:56,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-19 18:14:56,265 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 18:14:56,266 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 18:14:56,266 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:56,267 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-19 18:14:56,267 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 18:14:56,269 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:14:56,271 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:56,271 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade empty. 2023-07-19 18:14:56,272 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:56,272 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-19 18:14:56,288 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-19 18:14:56,289 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => d4a7e11c797a3cd910bfdb20bb1edade, NAME => 'unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:14:56,309 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:56,309 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing d4a7e11c797a3cd910bfdb20bb1edade, disabling compactions & flushes 2023-07-19 18:14:56,309 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:56,310 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:56,310 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. after waiting 0 ms 2023-07-19 18:14:56,310 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:56,310 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 
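Between the two table creations the test adds a second group and moves one server into it: AddRSGroup 'normal' followed by MoveServers for jenkins-hbase4.apache.org:40615, each bumping the ZK group info before the unmovedTable create begins. Client-side, with the same hbase-rsgroup helper as before (a sketch; Address.fromParts is the usual way to name a server by host and port):

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class AddGroupAndMoveServer {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          groups.addRSGroup("normal");               // AddRSGroup RPC, rewrites the rsgroup znodes
          groups.moveServers(                        // MoveServers RPC: default -> normal
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 40615)),
              "normal");
        }
      }
    }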
2023-07-19 18:14:56,310 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for d4a7e11c797a3cd910bfdb20bb1edade: 2023-07-19 18:14:56,312 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:14:56,313 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790496313"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790496313"}]},"ts":"1689790496313"} 2023-07-19 18:14:56,315 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 18:14:56,315 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:14:56,315 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790496315"}]},"ts":"1689790496315"} 2023-07-19 18:14:56,317 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-19 18:14:56,320 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, ASSIGN}] 2023-07-19 18:14:56,321 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, ASSIGN 2023-07-19 18:14:56,322 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:56,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-19 18:14:56,466 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-19 18:14:56,474 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=d4a7e11c797a3cd910bfdb20bb1edade, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:56,474 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790496474"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790496474"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790496474"}]},"ts":"1689790496474"} 2023-07-19 18:14:56,476 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure d4a7e11c797a3cd910bfdb20bb1edade, 
server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:56,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-19 18:14:56,631 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:56,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d4a7e11c797a3cd910bfdb20bb1edade, NAME => 'unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:56,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:56,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:56,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:56,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:56,633 INFO [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:56,634 DEBUG [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade/ut 2023-07-19 18:14:56,634 DEBUG [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade/ut 2023-07-19 18:14:56,634 INFO [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d4a7e11c797a3cd910bfdb20bb1edade columnFamilyName ut 2023-07-19 18:14:56,635 INFO [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] regionserver.HStore(310): Store=d4a7e11c797a3cd910bfdb20bb1edade/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:56,636 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:56,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:56,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:56,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:14:56,641 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d4a7e11c797a3cd910bfdb20bb1edade; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10453098720, jitterRate=-0.026479318737983704}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:56,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d4a7e11c797a3cd910bfdb20bb1edade: 2023-07-19 18:14:56,641 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade., pid=122, masterSystemTime=1689790496627 2023-07-19 18:14:56,643 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:56,643 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 
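The "SteppingSplitPolicy ... desiredMaxFileSize=..., jitterRate=..." fragment in the opens above is the default split policy reporting hbase.hregion.max.filesize (10 GiB here) with a per-region random jitter applied; the slightly different desiredMaxFileSize values across the opens are just different jitter draws. Both the policy and the size can also be pinned per table, sketched below with illustrative values not taken from the test:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public final class SplitPolicyExample {
      // SteppingSplitPolicy is the 2.x default; naming it explicitly just makes the choice visible.
      static TableDescriptor withExplicitSplitPolicy() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("unmovedTable"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("ut"))
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
            .setMaxFileSize(10L * 1024 * 1024 * 1024)   // 10 GiB target, before jitter
            .build();
      }
    }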
2023-07-19 18:14:56,643 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=d4a7e11c797a3cd910bfdb20bb1edade, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:56,643 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790496643"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790496643"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790496643"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790496643"}]},"ts":"1689790496643"} 2023-07-19 18:14:56,646 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-19 18:14:56,646 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure d4a7e11c797a3cd910bfdb20bb1edade, server=jenkins-hbase4.apache.org,43775,1689790473982 in 168 msec 2023-07-19 18:14:56,647 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-19 18:14:56,647 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, ASSIGN in 326 msec 2023-07-19 18:14:56,648 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:14:56,648 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790496648"}]},"ts":"1689790496648"} 2023-07-19 18:14:56,649 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-19 18:14:56,653 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:14:56,654 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 393 msec 2023-07-19 18:14:56,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-19 18:14:56,868 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-19 18:14:56,868 DEBUG [Listener at localhost/46039] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-19 18:14:56,868 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:56,872 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-19 18:14:56,873 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:56,873 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
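unmovedTable is now created and assigned, and the surrounding ListRSGroupInfos / GetRSGroupInfo / GetRSGroupInfoOfTable requests are the test checking which group each table and server ended up in. A read-only sketch of those same calls, again assuming the hbase-rsgroup client helper and its obvious getters:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class InspectGroups {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          for (RSGroupInfo g : groups.listRSGroups()) {          // ListRSGroupInfos RPC
            System.out.println(g.getName() + " servers=" + g.getServers()
                + " tables=" + g.getTables());
          }
          RSGroupInfo ofTable = groups.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename lives in group " + ofTable.getName()); // expected: oldgroup
        }
      }
    }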
2023-07-19 18:14:56,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-19 18:14:56,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 18:14:56,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 18:14:56,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:56,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:56,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 18:14:56,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-19 18:14:56,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region d4a7e11c797a3cd910bfdb20bb1edade to RSGroup normal 2023-07-19 18:14:56,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, REOPEN/MOVE 2023-07-19 18:14:56,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-19 18:14:56,881 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, REOPEN/MOVE 2023-07-19 18:14:56,881 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d4a7e11c797a3cd910bfdb20bb1edade, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:56,882 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790496881"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790496881"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790496881"}]},"ts":"1689790496881"} 2023-07-19 18:14:56,883 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure d4a7e11c797a3cd910bfdb20bb1edade, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:57,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:57,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d4a7e11c797a3cd910bfdb20bb1edade, disabling compactions & flushes 2023-07-19 18:14:57,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 
2023-07-19 18:14:57,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:57,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. after waiting 0 ms 2023-07-19 18:14:57,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:57,041 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:14:57,041 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:57,041 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d4a7e11c797a3cd910bfdb20bb1edade: 2023-07-19 18:14:57,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d4a7e11c797a3cd910bfdb20bb1edade move to jenkins-hbase4.apache.org,40615,1689790473552 record at close sequenceid=2 2023-07-19 18:14:57,043 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:57,043 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d4a7e11c797a3cd910bfdb20bb1edade, regionState=CLOSED 2023-07-19 18:14:57,043 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790497043"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790497043"}]},"ts":"1689790497043"} 2023-07-19 18:14:57,046 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-19 18:14:57,046 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure d4a7e11c797a3cd910bfdb20bb1edade, server=jenkins-hbase4.apache.org,43775,1689790473982 in 162 msec 2023-07-19 18:14:57,046 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40615,1689790473552; forceNewPlan=false, retain=false 2023-07-19 18:14:57,197 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d4a7e11c797a3cd910bfdb20bb1edade, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:57,197 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790497197"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790497197"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790497197"}]},"ts":"1689790497197"} 2023-07-19 18:14:57,199 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure d4a7e11c797a3cd910bfdb20bb1edade, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:57,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:57,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d4a7e11c797a3cd910bfdb20bb1edade, NAME => 'unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:57,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:57,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:57,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:57,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:57,358 INFO [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:57,359 DEBUG [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade/ut 2023-07-19 18:14:57,359 DEBUG [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade/ut 2023-07-19 18:14:57,359 INFO [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
d4a7e11c797a3cd910bfdb20bb1edade columnFamilyName ut 2023-07-19 18:14:57,360 INFO [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] regionserver.HStore(310): Store=d4a7e11c797a3cd910bfdb20bb1edade/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:57,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:57,362 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:57,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:57,368 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d4a7e11c797a3cd910bfdb20bb1edade; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11414755200, jitterRate=0.06308192014694214}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:57,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d4a7e11c797a3cd910bfdb20bb1edade: 2023-07-19 18:14:57,369 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade., pid=125, masterSystemTime=1689790497351 2023-07-19 18:14:57,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:57,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 
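Once pid=123 completes, the region is served from jenkins-hbase4.apache.org,40615. A small sketch of how a client could confirm the post-move placement with the standard client API; the table name matches the log, everything else is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionPlacementCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("unmovedTable"))) {
          // reload=true bypasses the client-side location cache so the post-move
          // assignment written to hbase:meta above is what gets returned.
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println("unmovedTable region is now on " + loc.getServerName());
        }
      }
    }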
2023-07-19 18:14:57,371 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d4a7e11c797a3cd910bfdb20bb1edade, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:57,371 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790497371"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790497371"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790497371"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790497371"}]},"ts":"1689790497371"} 2023-07-19 18:14:57,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-19 18:14:57,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure d4a7e11c797a3cd910bfdb20bb1edade, server=jenkins-hbase4.apache.org,40615,1689790473552 in 175 msec 2023-07-19 18:14:57,378 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, REOPEN/MOVE in 497 msec 2023-07-19 18:14:57,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-19 18:14:57,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-19 18:14:57,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:57,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:57,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:57,888 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:14:57,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-19 18:14:57,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:57,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-19 18:14:57,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:57,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-19 18:14:57,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:57,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-19 18:14:57,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 18:14:57,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:57,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:57,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 18:14:57,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-19 18:14:57,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-19 18:14:57,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:57,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:57,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-19 18:14:57,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:14:57,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-19 18:14:57,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:57,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-19 18:14:57,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:14:57,910 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:14:57,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:14:57,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-19 18:14:57,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 18:14:57,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:57,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:57,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 18:14:57,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 18:14:57,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-19 18:14:57,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region d4a7e11c797a3cd910bfdb20bb1edade to RSGroup default 2023-07-19 18:14:57,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, REOPEN/MOVE 2023-07-19 18:14:57,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 18:14:57,927 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, REOPEN/MOVE 2023-07-19 18:14:57,928 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=d4a7e11c797a3cd910bfdb20bb1edade, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:57,928 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790497928"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790497928"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790497928"}]},"ts":"1689790497928"} 2023-07-19 18:14:57,929 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure d4a7e11c797a3cd910bfdb20bb1edade, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:58,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:58,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d4a7e11c797a3cd910bfdb20bb1edade, disabling compactions & flushes 2023-07-19 18:14:58,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:58,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:58,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. after waiting 0 ms 2023-07-19 18:14:58,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:58,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 18:14:58,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:58,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d4a7e11c797a3cd910bfdb20bb1edade: 2023-07-19 18:14:58,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d4a7e11c797a3cd910bfdb20bb1edade move to jenkins-hbase4.apache.org,43775,1689790473982 record at close sequenceid=5 2023-07-19 18:14:58,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:58,091 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=d4a7e11c797a3cd910bfdb20bb1edade, regionState=CLOSED 2023-07-19 18:14:58,092 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790498091"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790498091"}]},"ts":"1689790498091"} 2023-07-19 18:14:58,094 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-19 18:14:58,094 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure d4a7e11c797a3cd910bfdb20bb1edade, server=jenkins-hbase4.apache.org,40615,1689790473552 in 164 msec 2023-07-19 18:14:58,095 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:14:58,245 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=d4a7e11c797a3cd910bfdb20bb1edade, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:58,246 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790498245"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790498245"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790498245"}]},"ts":"1689790498245"} 2023-07-19 18:14:58,247 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure d4a7e11c797a3cd910bfdb20bb1edade, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:14:58,363 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 18:14:58,403 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:58,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d4a7e11c797a3cd910bfdb20bb1edade, NAME => 'unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:58,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:58,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:58,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:58,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:58,405 INFO [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:58,406 DEBUG [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade/ut 2023-07-19 18:14:58,406 DEBUG [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade/ut 2023-07-19 18:14:58,407 INFO [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d4a7e11c797a3cd910bfdb20bb1edade columnFamilyName ut 2023-07-19 18:14:58,407 INFO [StoreOpener-d4a7e11c797a3cd910bfdb20bb1edade-1] regionserver.HStore(310): Store=d4a7e11c797a3cd910bfdb20bb1edade/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:58,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:58,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:58,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:14:58,413 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d4a7e11c797a3cd910bfdb20bb1edade; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11809045280, jitterRate=0.09980304539203644}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:58,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d4a7e11c797a3cd910bfdb20bb1edade: 2023-07-19 18:14:58,414 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade., pid=128, masterSystemTime=1689790498399 2023-07-19 18:14:58,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:14:58,415 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 
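The RenameRSGroup request handled at 18:14:57,891 above rewrites the group znodes so that oldgroup's servers and tables reappear under newgroup. A sketch of the corresponding client call follows; the renameRSGroup and getRSGroupInfo method names are assumed from this branch's RSGroupAdminClient and should be treated as illustrative.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RenameGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Rename keeps the group's servers and tables; only the name (and its znode) changes.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
          RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
          System.out.println("servers=" + renamed.getServers() + " tables=" + renamed.getTables());
        }
      }
    }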
2023-07-19 18:14:58,416 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=d4a7e11c797a3cd910bfdb20bb1edade, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:14:58,416 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689790498416"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790498416"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790498416"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790498416"}]},"ts":"1689790498416"} 2023-07-19 18:14:58,419 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-19 18:14:58,419 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure d4a7e11c797a3cd910bfdb20bb1edade, server=jenkins-hbase4.apache.org,43775,1689790473982 in 170 msec 2023-07-19 18:14:58,420 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=d4a7e11c797a3cd910bfdb20bb1edade, REOPEN/MOVE in 493 msec 2023-07-19 18:14:58,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-19 18:14:58,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-19 18:14:58,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:58,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40615] to rsgroup default 2023-07-19 18:14:58,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 18:14:58,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:58,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:58,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 18:14:58,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 18:14:58,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-19 18:14:58,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40615,1689790473552] are moved back to normal 2023-07-19 18:14:58,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-19 18:14:58,936 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:58,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-19 18:14:58,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:58,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:58,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 18:14:58,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-19 18:14:58,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:58,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:14:58,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 18:14:58,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:58,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:14:58,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:58,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:14:58,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:58,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 18:14:58,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 18:14:58,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:58,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-19 18:14:58,970 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:58,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 18:14:58,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:58,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-19 18:14:58,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(345): Moving region 4804c008141c21a22bec55f72429fc21 to RSGroup default 2023-07-19 18:14:58,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, REOPEN/MOVE 2023-07-19 18:14:58,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 18:14:58,975 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, REOPEN/MOVE 2023-07-19 18:14:58,976 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=4804c008141c21a22bec55f72429fc21, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:14:58,976 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790498975"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790498975"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790498975"}]},"ts":"1689790498975"} 2023-07-19 18:14:58,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 4804c008141c21a22bec55f72429fc21, server=jenkins-hbase4.apache.org,38419,1689790478179}] 2023-07-19 18:14:59,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:59,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4804c008141c21a22bec55f72429fc21, disabling compactions & flushes 2023-07-19 18:14:59,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:59,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:59,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 
after waiting 0 ms 2023-07-19 18:14:59,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:59,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 18:14:59,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:59,137 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4804c008141c21a22bec55f72429fc21: 2023-07-19 18:14:59,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4804c008141c21a22bec55f72429fc21 move to jenkins-hbase4.apache.org,40615,1689790473552 record at close sequenceid=5 2023-07-19 18:14:59,138 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:59,139 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=4804c008141c21a22bec55f72429fc21, regionState=CLOSED 2023-07-19 18:14:59,139 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790499139"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790499139"}]},"ts":"1689790499139"} 2023-07-19 18:14:59,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-19 18:14:59,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 4804c008141c21a22bec55f72429fc21, server=jenkins-hbase4.apache.org,38419,1689790478179 in 164 msec 2023-07-19 18:14:59,144 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40615,1689790473552; forceNewPlan=false, retain=false 2023-07-19 18:14:59,294 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
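From 18:14:58,929 onward the log shows the TestRSGroupsBase teardown draining each non-default group and deleting it: servers are moved back to default, then the now-empty group is removed. A sketch of that pattern; the host:port is taken from the log but otherwise illustrative, and removal is expected to be rejected while a group still holds servers or tables.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class DrainAndRemoveGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Move the group's only server back to "default", then drop the empty group,
          // mirroring the MoveServers + RemoveRSGroup calls in the teardown log.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 40615)),
              "default");
          rsGroupAdmin.removeRSGroup("normal");
        }
      }
    }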
2023-07-19 18:14:59,294 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=4804c008141c21a22bec55f72429fc21, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:59,295 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790499294"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790499294"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790499294"}]},"ts":"1689790499294"} 2023-07-19 18:14:59,296 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 4804c008141c21a22bec55f72429fc21, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:14:59,452 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:59,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4804c008141c21a22bec55f72429fc21, NAME => 'testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:14:59,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:59,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:14:59,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:59,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:59,454 INFO [StoreOpener-4804c008141c21a22bec55f72429fc21-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:59,455 DEBUG [StoreOpener-4804c008141c21a22bec55f72429fc21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21/tr 2023-07-19 18:14:59,455 DEBUG [StoreOpener-4804c008141c21a22bec55f72429fc21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21/tr 2023-07-19 18:14:59,456 INFO [StoreOpener-4804c008141c21a22bec55f72429fc21-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4804c008141c21a22bec55f72429fc21 columnFamilyName tr 2023-07-19 18:14:59,456 INFO [StoreOpener-4804c008141c21a22bec55f72429fc21-1] regionserver.HStore(310): Store=4804c008141c21a22bec55f72429fc21/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:14:59,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:59,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:59,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4804c008141c21a22bec55f72429fc21 2023-07-19 18:14:59,463 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4804c008141c21a22bec55f72429fc21; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10869187040, jitterRate=0.012271925806999207}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:14:59,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4804c008141c21a22bec55f72429fc21: 2023-07-19 18:14:59,464 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689790494603.4804c008141c21a22bec55f72429fc21., pid=131, masterSystemTime=1689790499448 2023-07-19 18:14:59,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:14:59,465 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 
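After the final REOPEN/MOVE the test re-checks group membership through the GetRSGroupInfoOfTable and ListRSGroupInfos calls seen in the surrounding log. A sketch of those lookups on the client side, again using the internal RSGroupAdminClient for illustration only.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class TableGroupLookupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Resolve which group currently owns testRename, then list all remaining groups.
          RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename belongs to group " + group.getName());
          rsGroupAdmin.listRSGroups()
              .forEach(g -> System.out.println(g.getName() + " -> " + g.getServers()));
        }
      }
    }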
2023-07-19 18:14:59,466 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=4804c008141c21a22bec55f72429fc21, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:14:59,466 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689790499465"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790499465"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790499465"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790499465"}]},"ts":"1689790499465"} 2023-07-19 18:14:59,468 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-19 18:14:59,468 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 4804c008141c21a22bec55f72429fc21, server=jenkins-hbase4.apache.org,40615,1689790473552 in 171 msec 2023-07-19 18:14:59,470 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=4804c008141c21a22bec55f72429fc21, REOPEN/MOVE in 494 msec 2023-07-19 18:14:59,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-19 18:14:59,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-19 18:14:59,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:14:59,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:38251] to rsgroup default 2023-07-19 18:14:59,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:59,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 18:14:59,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:14:59,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-19 18:14:59,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799, jenkins-hbase4.apache.org,38419,1689790478179] are moved back to newgroup 2023-07-19 18:14:59,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-19 18:14:59,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:14:59,982 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-19 18:14:59,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:59,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:14:59,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:14:59,989 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:14:59,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:14:59,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:14:59,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:14:59,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:00,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:00,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:15:00,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:00,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791700005, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:15:00,006 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:15:00,007 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:00,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,008 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:00,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:00,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:00,027 INFO [Listener at localhost/46039] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=510 (was 513), OpenFileDescriptor=777 (was 788), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=474 (was 463) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 173), AvailableMemoryMB=2623 (was 2642) 2023-07-19 18:15:00,028 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-19 18:15:00,047 INFO [Listener at localhost/46039] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=510, OpenFileDescriptor=777, MaxFileDescriptor=60000, SystemLoadAverage=474, ProcessCount=173, AvailableMemoryMB=2623 2023-07-19 18:15:00,048 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-19 18:15:00,048 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-19 18:15:00,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:00,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 18:15:00,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:00,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:00,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:00,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:00,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:00,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:00,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:00,064 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:00,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:00,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:00,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-19 18:15:00,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:00,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:00,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:15:00,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:00,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791700075, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:15:00,076 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 18:15:00,078 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:00,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,079 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:00,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:00,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:00,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-19 18:15:00,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:15:00,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-19 18:15:00,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-19 18:15:00,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-19 18:15:00,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:00,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-19 18:15:00,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:00,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:51588 deadline: 1689791700089, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-19 18:15:00,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-19 18:15:00,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:00,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:51588 deadline: 1689791700092, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-19 18:15:00,095 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-19 18:15:00,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-19 18:15:00,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-19 18:15:00,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:00,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:51588 deadline: 1689791700099, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-19 18:15:00,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:00,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
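The testBogusArgs exchange above probes the RSGroupAdminService endpoint with names that do not exist: the info lookups (GetRSGroupInfoOfTable, GetRSGroupInfoOfServer, GetRSGroupInfo) return empty results, while the mutations (RemoveRSGroup, MoveServers, BalanceRSGroup) are rejected server-side with ConstraintException. A minimal client-side sketch of the same calls follows, assuming the RSGroupAdminClient API from the branch-2.4 hbase-rsgroup module; the class, constructor, and method signatures are assumptions to verify against that source tree, not the test's own code.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class BogusArgsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient admin = new RSGroupAdminClient(conn);

          // Lookups against unknown names come back null/empty rather than failing
          // (the GetRSGroupInfoOfTable, GetRSGroupInfoOfServer, GetRSGroupInfo calls above).
          admin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent"));
          admin.getRSGroupOfServer(Address.fromParts("bogus", 123));
          admin.getRSGroupInfo("bogus");

          // Mutations against a missing group are rejected by RSGroupAdminServer with
          // ConstraintException, which reaches the caller as an IOException.
          try {
            admin.removeRSGroup("bogus");                 // callId 802 above
          } catch (IOException expected) { /* "RSGroup bogus does not exist" */ }
          try {
            admin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus"); // callId 805
          } catch (IOException expected) { /* "RSGroup does not exist: bogus" */ }
          try {
            admin.balanceRSGroup("bogus");                // callId 809 above
          } catch (IOException expected) { /* "RSGroup does not exist: bogus" */ }
        }
      }
    }
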
2023-07-19 18:15:00,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:00,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:00,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:00,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:00,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:00,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:00,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:00,115 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:00,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:00,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:00,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:00,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:00,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:00,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:15:00,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:00,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791700126, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:15:00,129 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:15:00,130 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:00,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,131 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:00,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:00,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:00,148 INFO [Listener at localhost/46039] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=514 (was 510) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-27 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3f2b7d7-shared-pool-28 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=777 (was 777), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=474 (was 474), ProcessCount=173 (was 173), AvailableMemoryMB=2622 (was 2623) 2023-07-19 18:15:00,149 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-19 18:15:00,167 INFO [Listener at localhost/46039] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=514, OpenFileDescriptor=777, MaxFileDescriptor=60000, SystemLoadAverage=474, ProcessCount=173, AvailableMemoryMB=2622 2023-07-19 18:15:00,167 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-19 18:15:00,168 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-19 18:15:00,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:00,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
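The same setup/teardown sequence repeats around each test method above and below: the "master" rsgroup is removed and re-added, and the active master's address is then moved into it, which RSGroupAdminServer rejects because only online region servers can be moved; the test logs this as "Got this on setup, FYI" and continues. A hedged sketch of that step is shown here; the helper name and structure are illustrative, and only the rsgroup client calls mirror the log.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MasterGroupSetupSketch {
      // Illustrative helper, not the test's own code: mirrors the
      // RemoveRSGroup / AddRSGroup / MoveServers sequence logged above.
      static void ensureMasterGroup(RSGroupAdminClient admin, ServerName master) throws IOException {
        admin.removeRSGroup("master");   // "Writing ZK GroupInfo count: 3"
        admin.addRSGroup("master");      // "Writing ZK GroupInfo count: 4"
        Address masterAddr = Address.fromParts(master.getHostname(), master.getPort());
        try {
          // Rejected: this address is the HMaster RPC endpoint (port 46739 in this run),
          // which is not in the cluster's set of online region servers.
          admin.moveServers(Collections.singleton(masterAddr), "master");
        } catch (IOException e) {
          // ConstraintException: "Server jenkins-hbase4.apache.org:46739 is either
          // offline or it does not exist." -- logged as "Got this on setup, FYI".
        }
      }
    }
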
2023-07-19 18:15:00,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:00,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:00,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:00,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:00,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:00,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:00,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:00,183 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:00,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:00,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:00,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:00,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:00,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:00,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:15:00,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-19 18:15:00,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791700193, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist.
2023-07-19 18:15:00,194 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-19 18:15:00,195 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:00,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,196 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:00,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:00,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:00,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:00,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:00,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1704307594 2023-07-19 18:15:00,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1704307594 2023-07-19 18:15:00,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 
18:15:00,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:00,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:15:00,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:00,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:38251] to rsgroup Group_testDisabledTableMove_1704307594 2023-07-19 18:15:00,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1704307594 2023-07-19 18:15:00,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:00,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:00,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:15:00,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 18:15:00,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799, jenkins-hbase4.apache.org,38419,1689790478179] are moved back to default 2023-07-19 18:15:00,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1704307594 2023-07-19 18:15:00,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:00,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:00,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:00,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1704307594 2023-07-19 18:15:00,226 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:00,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:00,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-19 18:15:00,232 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:15:00,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-19 18:15:00,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-19 18:15:00,234 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1704307594 2023-07-19 18:15:00,234 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:00,235 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:00,235 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:15:00,237 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:15:00,242 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:00,242 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:00,242 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:00,242 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:00,242 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:00,245 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a empty. 2023-07-19 18:15:00,245 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9 empty. 2023-07-19 18:15:00,245 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae empty. 2023-07-19 18:15:00,245 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4 empty. 2023-07-19 18:15:00,245 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853 empty. 2023-07-19 18:15:00,245 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:00,245 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:00,245 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:00,245 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:00,246 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:00,246 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-19 18:15:00,265 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:00,271 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1e23ca253c36440481cd661f169af0a9, NAME => 'Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', 
BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:15:00,271 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 49bd4495311e987e0e59ab3f393ad6ae, NAME => 'Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:15:00,271 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => a837b98ff08731d5d582a0ca25ffec8a, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:15:00,311 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:00,311 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 49bd4495311e987e0e59ab3f393ad6ae, disabling compactions & flushes 2023-07-19 18:15:00,311 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 2023-07-19 18:15:00,311 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 2023-07-19 18:15:00,311 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. after waiting 0 ms 2023-07-19 18:15:00,312 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 2023-07-19 18:15:00,312 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 
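The CreateTableProcedure records above (pid=132) correspond to a client-side create-table call for 'Group_testDisabledTableMove' with a single column family 'f' and five regions split at aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B and zzzzz. A minimal sketch of such a call with the 2.x client API follows; it is illustrative only, the Admin handle and helper class are assumptions, and the split boundaries are taken from the log.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative helper: builds the one-family descriptor seen in the log and
// pre-splits the table between "aaaaa" and "zzzzz".
final class CreateTableSketch {
  static void createGroupTestTable(Admin admin) throws java.io.IOException {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();
    // Admin#createTable(desc, startKey, endKey, numRegions) derives the
    // intermediate split points; five regions yields boundaries like those
    // logged for this table.
    admin.createTable(desc, Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 5);
  }
}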
2023-07-19 18:15:00,312 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 49bd4495311e987e0e59ab3f393ad6ae: 2023-07-19 18:15:00,312 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 547bae4eb02fe957994c9e66fd3c75c4, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:15:00,321 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:00,322 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing a837b98ff08731d5d582a0ca25ffec8a, disabling compactions & flushes 2023-07-19 18:15:00,322 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 2023-07-19 18:15:00,322 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 2023-07-19 18:15:00,322 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. after waiting 0 ms 2023-07-19 18:15:00,322 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 2023-07-19 18:15:00,322 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 
2023-07-19 18:15:00,322 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for a837b98ff08731d5d582a0ca25ffec8a: 2023-07-19 18:15:00,322 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 75137af7a1516b89d98f767bed5a7853, NAME => 'Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp 2023-07-19 18:15:00,323 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:00,323 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 1e23ca253c36440481cd661f169af0a9, disabling compactions & flushes 2023-07-19 18:15:00,323 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 2023-07-19 18:15:00,323 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 2023-07-19 18:15:00,323 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. after waiting 0 ms 2023-07-19 18:15:00,324 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 2023-07-19 18:15:00,324 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 2023-07-19 18:15:00,324 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 1e23ca253c36440481cd661f169af0a9: 2023-07-19 18:15:00,334 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:00,334 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 547bae4eb02fe957994c9e66fd3c75c4, disabling compactions & flushes 2023-07-19 18:15:00,334 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 
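The group bootstrap that precedes this table creation (the RSGroupAdminService.AddRSGroup and MoveServers requests near the start of this section) can be pictured with the sketch below. It assumes the RSGroupAdminClient helper referenced in the teardown stack trace and an existing Connection; it is not the test's own code, and the exact constructor and method signatures should be checked against branch-2.4.

import java.util.Arrays;
import java.util.HashSet;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Illustrative sketch: add the test group, then move two region servers into it.
final class RSGroupSetupSketch {
  static void setUpGroup(Connection conn) throws java.io.IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_1704307594");
    rsGroupAdmin.moveServers(
        new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 38419),
            Address.fromParts("jenkins-hbase4.apache.org", 38251))),
        "Group_testDisabledTableMove_1704307594");
    // Passing an address that is not a live region server (for example the
    // master's RPC endpoint, port 46739 here) is rejected by the master with
    // the ConstraintException shown in the teardown trace earlier.
  }
}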
2023-07-19 18:15:00,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-19 18:15:00,334 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 2023-07-19 18:15:00,334 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. after waiting 0 ms 2023-07-19 18:15:00,335 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 2023-07-19 18:15:00,335 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 2023-07-19 18:15:00,335 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 547bae4eb02fe957994c9e66fd3c75c4: 2023-07-19 18:15:00,341 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-19 18:15:00,344 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:00,344 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 75137af7a1516b89d98f767bed5a7853, disabling compactions & flushes 2023-07-19 18:15:00,344 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 2023-07-19 18:15:00,344 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 2023-07-19 18:15:00,344 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. after waiting 0 ms 2023-07-19 18:15:00,344 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 2023-07-19 18:15:00,344 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 
2023-07-19 18:15:00,344 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 75137af7a1516b89d98f767bed5a7853: 2023-07-19 18:15:00,347 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:15:00,351 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689790500351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790500351"}]},"ts":"1689790500351"} 2023-07-19 18:15:00,351 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790500351"}]},"ts":"1689790500351"} 2023-07-19 18:15:00,351 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790500351"}]},"ts":"1689790500351"} 2023-07-19 18:15:00,352 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790500351"}]},"ts":"1689790500351"} 2023-07-19 18:15:00,352 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689790500351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790500351"}]},"ts":"1689790500351"} 2023-07-19 18:15:00,354 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
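The MetaTableAccessor "Put" records above show the info:regioninfo and info:state columns written to hbase:meta for each new region. A hedged sketch of reading one of those rows back from the catalog table follows; the helper class is an assumption and the row key is copied from the log.

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative: fetch the catalog row written for the first region above and
// decode the serialized RegionInfo from info:regioninfo.
final class MetaRowSketch {
  static RegionInfo readRegionInfo(Connection conn) throws java.io.IOException {
    byte[] row = Bytes.toBytes(
        "Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.");
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Result r = meta.get(new Get(row)
          .addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER));
      return RegionInfo.parseFromOrNull(
          r.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER));
    }
  }
}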
2023-07-19 18:15:00,355 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:15:00,355 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790500355"}]},"ts":"1689790500355"} 2023-07-19 18:15:00,357 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-19 18:15:00,361 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:15:00,361 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:15:00,361 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:15:00,361 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:15:00,362 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49bd4495311e987e0e59ab3f393ad6ae, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e23ca253c36440481cd661f169af0a9, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a837b98ff08731d5d582a0ca25ffec8a, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=547bae4eb02fe957994c9e66fd3c75c4, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=75137af7a1516b89d98f767bed5a7853, ASSIGN}] 2023-07-19 18:15:00,364 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=547bae4eb02fe957994c9e66fd3c75c4, ASSIGN 2023-07-19 18:15:00,364 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a837b98ff08731d5d582a0ca25ffec8a, ASSIGN 2023-07-19 18:15:00,364 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e23ca253c36440481cd661f169af0a9, ASSIGN 2023-07-19 18:15:00,364 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49bd4495311e987e0e59ab3f393ad6ae, ASSIGN 2023-07-19 18:15:00,365 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=547bae4eb02fe957994c9e66fd3c75c4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:15:00,365 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=75137af7a1516b89d98f767bed5a7853, ASSIGN 2023-07-19 18:15:00,365 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a837b98ff08731d5d582a0ca25ffec8a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43775,1689790473982; forceNewPlan=false, retain=false 2023-07-19 18:15:00,365 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e23ca253c36440481cd661f169af0a9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40615,1689790473552; forceNewPlan=false, retain=false 2023-07-19 18:15:00,366 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49bd4495311e987e0e59ab3f393ad6ae, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40615,1689790473552; forceNewPlan=false, retain=false 2023-07-19 18:15:00,366 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=75137af7a1516b89d98f767bed5a7853, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40615,1689790473552; forceNewPlan=false, retain=false 2023-07-19 18:15:00,515 INFO [jenkins-hbase4:46739] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
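The repeated "Checking to see if procedure is done pid=132" records reflect the client polling for the CreateTableProcedure while the ASSIGN subprocedures (pids 133 to 137) dispatch the five regions. In mini-cluster tests this is typically waited on through the testing utility, roughly as in the sketch below; the helper class is an assumption, while the utility methods are the standard HBaseTestingUtility waits.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

final class WaitForAssignmentSketch {
  // Blocks until every region of the new table is assigned and open,
  // mirroring the ASSIGN subprocedures logged above.
  static void waitForTable(HBaseTestingUtility util) throws Exception {
    TableName tn = TableName.valueOf("Group_testDisabledTableMove");
    util.waitUntilAllRegionsAssigned(tn);
    util.waitTableAvailable(tn);
  }
}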
2023-07-19 18:15:00,519 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=75137af7a1516b89d98f767bed5a7853, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:00,519 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=a837b98ff08731d5d582a0ca25ffec8a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:15:00,519 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689790500519"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790500519"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790500519"}]},"ts":"1689790500519"} 2023-07-19 18:15:00,519 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=547bae4eb02fe957994c9e66fd3c75c4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:15:00,519 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=49bd4495311e987e0e59ab3f393ad6ae, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:00,519 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=1e23ca253c36440481cd661f169af0a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:00,520 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689790500519"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790500519"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790500519"}]},"ts":"1689790500519"} 2023-07-19 18:15:00,520 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500519"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790500519"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790500519"}]},"ts":"1689790500519"} 2023-07-19 18:15:00,520 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500519"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790500519"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790500519"}]},"ts":"1689790500519"} 2023-07-19 18:15:00,519 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500519"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790500519"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790500519"}]},"ts":"1689790500519"} 2023-07-19 18:15:00,521 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE; OpenRegionProcedure 75137af7a1516b89d98f767bed5a7853, 
server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:15:00,521 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=133, state=RUNNABLE; OpenRegionProcedure 49bd4495311e987e0e59ab3f393ad6ae, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:15:00,522 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=134, state=RUNNABLE; OpenRegionProcedure 1e23ca253c36440481cd661f169af0a9, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:15:00,523 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=136, state=RUNNABLE; OpenRegionProcedure 547bae4eb02fe957994c9e66fd3c75c4, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:15:00,526 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=135, state=RUNNABLE; OpenRegionProcedure a837b98ff08731d5d582a0ca25ffec8a, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:15:00,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-19 18:15:00,683 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 2023-07-19 18:15:00,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49bd4495311e987e0e59ab3f393ad6ae, NAME => 'Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 18:15:00,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:00,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:00,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:00,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:00,699 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 
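Note that the OpenRegionProcedures above target jenkins-hbase4.apache.org,40615 and jenkins-hbase4.apache.org,43775, the servers that stayed in the default group, while 38251 and 38419 were moved into the new group earlier in this section. A hedged sketch of how region placement can be inspected from the client side, which the test could compare against group membership, follows; the helper class is illustrative.

import java.util.List;

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

// Illustrative: list where each region of the table was opened.
final class RegionPlacementSketch {
  static void printPlacement(Connection conn) throws java.io.IOException {
    try (RegionLocator locator =
        conn.getRegionLocator(TableName.valueOf("Group_testDisabledTableMove"))) {
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      for (HRegionLocation loc : locations) {
        System.out.println(loc.getRegion().getRegionNameAsString()
            + " -> " + loc.getServerName());
      }
    }
  }
}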
2023-07-19 18:15:00,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a837b98ff08731d5d582a0ca25ffec8a, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 18:15:00,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:00,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:00,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:00,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:00,701 INFO [StoreOpener-49bd4495311e987e0e59ab3f393ad6ae-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:00,701 INFO [StoreOpener-a837b98ff08731d5d582a0ca25ffec8a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:00,702 DEBUG [StoreOpener-49bd4495311e987e0e59ab3f393ad6ae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae/f 2023-07-19 18:15:00,702 DEBUG [StoreOpener-49bd4495311e987e0e59ab3f393ad6ae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae/f 2023-07-19 18:15:00,703 DEBUG [StoreOpener-a837b98ff08731d5d582a0ca25ffec8a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a/f 2023-07-19 18:15:00,703 DEBUG [StoreOpener-a837b98ff08731d5d582a0ca25ffec8a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a/f 2023-07-19 18:15:00,703 INFO [StoreOpener-49bd4495311e987e0e59ab3f393ad6ae-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49bd4495311e987e0e59ab3f393ad6ae columnFamilyName f 2023-07-19 18:15:00,703 INFO [StoreOpener-a837b98ff08731d5d582a0ca25ffec8a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a837b98ff08731d5d582a0ca25ffec8a columnFamilyName f 2023-07-19 18:15:00,703 INFO [StoreOpener-49bd4495311e987e0e59ab3f393ad6ae-1] regionserver.HStore(310): Store=49bd4495311e987e0e59ab3f393ad6ae/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:00,704 INFO [StoreOpener-a837b98ff08731d5d582a0ca25ffec8a-1] regionserver.HStore(310): Store=a837b98ff08731d5d582a0ca25ffec8a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:00,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:00,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:00,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:00,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:00,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:00,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:00,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-19 18:15:00,715 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a837b98ff08731d5d582a0ca25ffec8a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9516640000, jitterRate=-0.11369383335113525}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:00,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a837b98ff08731d5d582a0ca25ffec8a: 2023-07-19 18:15:00,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:00,717 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 49bd4495311e987e0e59ab3f393ad6ae; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11299992000, jitterRate=0.05239376425743103}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:00,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49bd4495311e987e0e59ab3f393ad6ae: 2023-07-19 18:15:00,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae., pid=139, masterSystemTime=1689790500673 2023-07-19 18:15:00,719 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a., pid=142, masterSystemTime=1689790500675 2023-07-19 18:15:00,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 2023-07-19 18:15:00,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 2023-07-19 18:15:00,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 
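The two "Opened ..." records above print per-region split policies with desiredMaxFileSize=9516640000 (jitterRate=-0.11369...) and 11299992000 (jitterRate=0.05239...). Those values are consistent with the default 10 GiB hbase.hregion.max.filesize scaled by (1 + jitterRate), so each region computes a slightly randomized split threshold. The small sketch below reproduces that arithmetic; the class and variable names are descriptive only, not HBase identifiers.

// Reproduces the jittered split sizes printed in the log:
// 10 GiB * (1 - 0.11369383) is approximately 9516640000 and
// 10 GiB * (1 + 0.05239376) is approximately 11299992000.
final class SplitSizeJitterSketch {
  public static void main(String[] args) {
    long maxFileSize = 10L * 1024 * 1024 * 1024;   // default hbase.hregion.max.filesize
    double[] jitterRates = {-0.11369383335113525, 0.05239376425743103};
    for (double jitterRate : jitterRates) {
      System.out.println((long) (maxFileSize * (1.0 + jitterRate)));
    }
  }
}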
2023-07-19 18:15:00,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1e23ca253c36440481cd661f169af0a9, NAME => 'Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 18:15:00,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:00,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:00,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:00,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:00,722 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=49bd4495311e987e0e59ab3f393ad6ae, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:00,722 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689790500722"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790500722"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790500722"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790500722"}]},"ts":"1689790500722"} 2023-07-19 18:15:00,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 2023-07-19 18:15:00,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 2023-07-19 18:15:00,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 
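[Editor's note] The CompactionConfiguration entries above echo the effective per-store compaction settings (128 MB minimum compact size, 3-10 files per compaction, ratio 1.2, off-peak ratio 5.0, weekly major compactions with 0.5 jitter). Assuming the standard HBase 2.x configuration keys, a sketch of how a test could set these values explicitly rather than relying on the defaults this run appears to use:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfSketch {
      public static void main(String[] args) {
        // A minimal sketch, assuming the usual key names; the values mirror the log lines above.
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);                 // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);          // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);  // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);     // major period, ms (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);   // major jitter
      }
    }
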
2023-07-19 18:15:00,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 547bae4eb02fe957994c9e66fd3c75c4, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 18:15:00,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:00,724 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=a837b98ff08731d5d582a0ca25ffec8a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:15:00,724 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500724"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790500724"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790500724"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790500724"}]},"ts":"1689790500724"} 2023-07-19 18:15:00,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:00,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:00,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:00,726 INFO [StoreOpener-1e23ca253c36440481cd661f169af0a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:00,730 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=133 2023-07-19 18:15:00,730 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=133, state=SUCCESS; OpenRegionProcedure 49bd4495311e987e0e59ab3f393ad6ae, server=jenkins-hbase4.apache.org,40615,1689790473552 in 203 msec 2023-07-19 18:15:00,730 INFO [StoreOpener-547bae4eb02fe957994c9e66fd3c75c4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:00,732 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=135 2023-07-19 18:15:00,732 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=135, state=SUCCESS; OpenRegionProcedure a837b98ff08731d5d582a0ca25ffec8a, server=jenkins-hbase4.apache.org,43775,1689790473982 in 200 msec 2023-07-19 18:15:00,732 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): 
Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49bd4495311e987e0e59ab3f393ad6ae, ASSIGN in 368 msec 2023-07-19 18:15:00,733 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a837b98ff08731d5d582a0ca25ffec8a, ASSIGN in 370 msec 2023-07-19 18:15:00,735 DEBUG [StoreOpener-1e23ca253c36440481cd661f169af0a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9/f 2023-07-19 18:15:00,735 DEBUG [StoreOpener-1e23ca253c36440481cd661f169af0a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9/f 2023-07-19 18:15:00,735 INFO [StoreOpener-1e23ca253c36440481cd661f169af0a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1e23ca253c36440481cd661f169af0a9 columnFamilyName f 2023-07-19 18:15:00,736 INFO [StoreOpener-1e23ca253c36440481cd661f169af0a9-1] regionserver.HStore(310): Store=1e23ca253c36440481cd661f169af0a9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:00,737 DEBUG [StoreOpener-547bae4eb02fe957994c9e66fd3c75c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4/f 2023-07-19 18:15:00,737 DEBUG [StoreOpener-547bae4eb02fe957994c9e66fd3c75c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4/f 2023-07-19 18:15:00,737 INFO [StoreOpener-547bae4eb02fe957994c9e66fd3c75c4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 547bae4eb02fe957994c9e66fd3c75c4 columnFamilyName f 2023-07-19 18:15:00,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:00,738 INFO [StoreOpener-547bae4eb02fe957994c9e66fd3c75c4-1] regionserver.HStore(310): Store=547bae4eb02fe957994c9e66fd3c75c4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:00,738 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:00,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:00,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:00,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:00,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:00,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:00,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:00,758 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 547bae4eb02fe957994c9e66fd3c75c4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11850845120, jitterRate=0.10369595885276794}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:00,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 547bae4eb02fe957994c9e66fd3c75c4: 2023-07-19 18:15:00,758 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1e23ca253c36440481cd661f169af0a9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11091440480, jitterRate=0.032970890402793884}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:00,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1e23ca253c36440481cd661f169af0a9: 2023-07-19 18:15:00,759 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4., pid=141, masterSystemTime=1689790500675 2023-07-19 18:15:00,759 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9., pid=140, masterSystemTime=1689790500673 2023-07-19 18:15:00,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 2023-07-19 18:15:00,761 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 2023-07-19 18:15:00,761 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=547bae4eb02fe957994c9e66fd3c75c4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:15:00,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 2023-07-19 18:15:00,762 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 2023-07-19 18:15:00,762 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500761"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790500761"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790500761"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790500761"}]},"ts":"1689790500761"} 2023-07-19 18:15:00,762 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 
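[Editor's note] The regions opened above are the five pre-split regions of the newly created Group_testDisabledTableMove table (split points 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'). A minimal sketch of that create-and-wait pattern against a mini-cluster, assuming HBaseTestingUtility's createTable and waitUntilAllRegionsAssigned helpers; this is not the test's exact code, and the binary split points from the log are omitted for brevity:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        // Mirrors the CreateTableProcedure / "Waiting until all regions ... get assigned" sequence.
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);
        TableName tn = TableName.valueOf("Group_testDisabledTableMove");
        byte[][] splitKeys = { Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz") };  // illustrative subset
        util.createTable(tn, Bytes.toBytes("f"), splitKeys);   // column family "f" as in the log
        util.waitUntilAllRegionsAssigned(tn);
        util.shutdownMiniCluster();
      }
    }
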
2023-07-19 18:15:00,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 75137af7a1516b89d98f767bed5a7853, NAME => 'Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 18:15:00,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:00,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:00,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:00,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:00,764 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=1e23ca253c36440481cd661f169af0a9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:00,764 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500764"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790500764"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790500764"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790500764"}]},"ts":"1689790500764"} 2023-07-19 18:15:00,765 INFO [StoreOpener-75137af7a1516b89d98f767bed5a7853-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:00,766 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=136 2023-07-19 18:15:00,766 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=136, state=SUCCESS; OpenRegionProcedure 547bae4eb02fe957994c9e66fd3c75c4, server=jenkins-hbase4.apache.org,43775,1689790473982 in 241 msec 2023-07-19 18:15:00,768 DEBUG [StoreOpener-75137af7a1516b89d98f767bed5a7853-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853/f 2023-07-19 18:15:00,768 DEBUG [StoreOpener-75137af7a1516b89d98f767bed5a7853-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853/f 2023-07-19 18:15:00,768 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=547bae4eb02fe957994c9e66fd3c75c4, ASSIGN in 404 msec 2023-07-19 18:15:00,768 INFO 
[PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=134 2023-07-19 18:15:00,768 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=134, state=SUCCESS; OpenRegionProcedure 1e23ca253c36440481cd661f169af0a9, server=jenkins-hbase4.apache.org,40615,1689790473552 in 244 msec 2023-07-19 18:15:00,768 INFO [StoreOpener-75137af7a1516b89d98f767bed5a7853-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 75137af7a1516b89d98f767bed5a7853 columnFamilyName f 2023-07-19 18:15:00,769 INFO [StoreOpener-75137af7a1516b89d98f767bed5a7853-1] regionserver.HStore(310): Store=75137af7a1516b89d98f767bed5a7853/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:00,770 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e23ca253c36440481cd661f169af0a9, ASSIGN in 406 msec 2023-07-19 18:15:00,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:00,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:00,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:00,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:00,786 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 75137af7a1516b89d98f767bed5a7853; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10988350080, jitterRate=0.02336984872817993}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:00,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 75137af7a1516b89d98f767bed5a7853: 2023-07-19 18:15:00,787 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853., pid=138, 
masterSystemTime=1689790500673 2023-07-19 18:15:00,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 2023-07-19 18:15:00,789 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 2023-07-19 18:15:00,789 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=75137af7a1516b89d98f767bed5a7853, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:00,790 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689790500789"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790500789"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790500789"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790500789"}]},"ts":"1689790500789"} 2023-07-19 18:15:00,793 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=137 2023-07-19 18:15:00,793 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; OpenRegionProcedure 75137af7a1516b89d98f767bed5a7853, server=jenkins-hbase4.apache.org,40615,1689790473552 in 270 msec 2023-07-19 18:15:00,794 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=132 2023-07-19 18:15:00,795 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=75137af7a1516b89d98f767bed5a7853, ASSIGN in 431 msec 2023-07-19 18:15:00,795 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:15:00,795 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790500795"}]},"ts":"1689790500795"} 2023-07-19 18:15:00,797 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-19 18:15:00,800 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:15:00,802 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 571 msec 2023-07-19 18:15:00,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-19 18:15:00,838 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-19 18:15:00,838 DEBUG [Listener at localhost/46039] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove 
get assigned. Timeout = 60000ms 2023-07-19 18:15:00,838 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:00,843 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-19 18:15:00,843 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:00,843 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-19 18:15:00,844 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:00,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-19 18:15:00,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:15:00,853 INFO [Listener at localhost/46039] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-19 18:15:00,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-19 18:15:00,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-19 18:15:00,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-19 18:15:00,857 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790500857"}]},"ts":"1689790500857"} 2023-07-19 18:15:00,859 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-19 18:15:00,860 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-19 18:15:00,861 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49bd4495311e987e0e59ab3f393ad6ae, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e23ca253c36440481cd661f169af0a9, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a837b98ff08731d5d582a0ca25ffec8a, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=547bae4eb02fe957994c9e66fd3c75c4, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=75137af7a1516b89d98f767bed5a7853, UNASSIGN}] 2023-07-19 18:15:00,863 INFO [PEWorker-3] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49bd4495311e987e0e59ab3f393ad6ae, UNASSIGN 2023-07-19 18:15:00,863 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a837b98ff08731d5d582a0ca25ffec8a, UNASSIGN 2023-07-19 18:15:00,863 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e23ca253c36440481cd661f169af0a9, UNASSIGN 2023-07-19 18:15:00,863 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=547bae4eb02fe957994c9e66fd3c75c4, UNASSIGN 2023-07-19 18:15:00,864 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=75137af7a1516b89d98f767bed5a7853, UNASSIGN 2023-07-19 18:15:00,864 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=49bd4495311e987e0e59ab3f393ad6ae, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:00,864 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=a837b98ff08731d5d582a0ca25ffec8a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:15:00,864 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689790500864"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790500864"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790500864"}]},"ts":"1689790500864"} 2023-07-19 18:15:00,864 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500864"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790500864"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790500864"}]},"ts":"1689790500864"} 2023-07-19 18:15:00,864 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=1e23ca253c36440481cd661f169af0a9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:00,864 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=547bae4eb02fe957994c9e66fd3c75c4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:15:00,864 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500864"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790500864"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790500864"}]},"ts":"1689790500864"} 2023-07-19 18:15:00,864 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790500864"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790500864"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790500864"}]},"ts":"1689790500864"} 2023-07-19 18:15:00,865 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=75137af7a1516b89d98f767bed5a7853, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:00,865 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689790500865"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790500865"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790500865"}]},"ts":"1689790500865"} 2023-07-19 18:15:00,865 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=144, state=RUNNABLE; CloseRegionProcedure 49bd4495311e987e0e59ab3f393ad6ae, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:15:00,866 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=146, state=RUNNABLE; CloseRegionProcedure a837b98ff08731d5d582a0ca25ffec8a, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:15:00,866 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=145, state=RUNNABLE; CloseRegionProcedure 1e23ca253c36440481cd661f169af0a9, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:15:00,867 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=147, state=RUNNABLE; CloseRegionProcedure 547bae4eb02fe957994c9e66fd3c75c4, server=jenkins-hbase4.apache.org,43775,1689790473982}] 2023-07-19 18:15:00,868 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=148, state=RUNNABLE; CloseRegionProcedure 75137af7a1516b89d98f767bed5a7853, server=jenkins-hbase4.apache.org,40615,1689790473552}] 2023-07-19 18:15:00,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-19 18:15:01,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:01,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 75137af7a1516b89d98f767bed5a7853, disabling compactions & flushes 2023-07-19 18:15:01,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 
2023-07-19 18:15:01,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 2023-07-19 18:15:01,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. after waiting 0 ms 2023-07-19 18:15:01,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 2023-07-19 18:15:01,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:01,021 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 547bae4eb02fe957994c9e66fd3c75c4, disabling compactions & flushes 2023-07-19 18:15:01,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 2023-07-19 18:15:01,021 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 2023-07-19 18:15:01,021 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. after waiting 0 ms 2023-07-19 18:15:01,021 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 2023-07-19 18:15:01,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:15:01,026 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4. 2023-07-19 18:15:01,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 547bae4eb02fe957994c9e66fd3c75c4: 2023-07-19 18:15:01,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:15:01,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853. 
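[Editor's note] The UNASSIGN / CloseRegionProcedure activity above is the master-side half of a single client disable request (the "Started disable of Group_testDisabledTableMove" entry). A minimal client-side sketch of that call, assuming a standard Connection; not the test's exact code:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableTableSketch {
      public static void main(String[] args) throws Exception {
        // Drives a DisableTableProcedure on the master, which closes each region as logged above.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.disableTable(TableName.valueOf("Group_testDisabledTableMove"));
        }
      }
    }
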
2023-07-19 18:15:01,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 75137af7a1516b89d98f767bed5a7853: 2023-07-19 18:15:01,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:01,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:01,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a837b98ff08731d5d582a0ca25ffec8a, disabling compactions & flushes 2023-07-19 18:15:01,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 2023-07-19 18:15:01,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 2023-07-19 18:15:01,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. after waiting 0 ms 2023-07-19 18:15:01,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 2023-07-19 18:15:01,029 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=547bae4eb02fe957994c9e66fd3c75c4, regionState=CLOSED 2023-07-19 18:15:01,029 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790501029"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790501029"}]},"ts":"1689790501029"} 2023-07-19 18:15:01,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:01,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:01,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49bd4495311e987e0e59ab3f393ad6ae, disabling compactions & flushes 2023-07-19 18:15:01,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 2023-07-19 18:15:01,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 2023-07-19 18:15:01,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. after waiting 0 ms 2023-07-19 18:15:01,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 
2023-07-19 18:15:01,032 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=75137af7a1516b89d98f767bed5a7853, regionState=CLOSED 2023-07-19 18:15:01,032 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689790501032"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790501032"}]},"ts":"1689790501032"} 2023-07-19 18:15:01,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:15:01,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae. 2023-07-19 18:15:01,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49bd4495311e987e0e59ab3f393ad6ae: 2023-07-19 18:15:01,039 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:15:01,039 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=147 2023-07-19 18:15:01,039 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=147, state=SUCCESS; CloseRegionProcedure 547bae4eb02fe957994c9e66fd3c75c4, server=jenkins-hbase4.apache.org,43775,1689790473982 in 169 msec 2023-07-19 18:15:01,039 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:01,039 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:01,041 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1e23ca253c36440481cd661f169af0a9, disabling compactions & flushes 2023-07-19 18:15:01,041 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 2023-07-19 18:15:01,041 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 2023-07-19 18:15:01,041 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. after waiting 0 ms 2023-07-19 18:15:01,041 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 2023-07-19 18:15:01,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a. 
2023-07-19 18:15:01,042 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a837b98ff08731d5d582a0ca25ffec8a: 2023-07-19 18:15:01,043 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=49bd4495311e987e0e59ab3f393ad6ae, regionState=CLOSED 2023-07-19 18:15:01,043 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689790501043"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790501043"}]},"ts":"1689790501043"} 2023-07-19 18:15:01,045 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=148 2023-07-19 18:15:01,045 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=148, state=SUCCESS; CloseRegionProcedure 75137af7a1516b89d98f767bed5a7853, server=jenkins-hbase4.apache.org,40615,1689790473552 in 171 msec 2023-07-19 18:15:01,045 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:01,045 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=547bae4eb02fe957994c9e66fd3c75c4, UNASSIGN in 178 msec 2023-07-19 18:15:01,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:15:01,046 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=a837b98ff08731d5d582a0ca25ffec8a, regionState=CLOSED 2023-07-19 18:15:01,046 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790501046"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790501046"}]},"ts":"1689790501046"} 2023-07-19 18:15:01,046 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9. 
2023-07-19 18:15:01,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1e23ca253c36440481cd661f169af0a9: 2023-07-19 18:15:01,051 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=144 2023-07-19 18:15:01,051 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=144, state=SUCCESS; CloseRegionProcedure 49bd4495311e987e0e59ab3f393ad6ae, server=jenkins-hbase4.apache.org,40615,1689790473552 in 179 msec 2023-07-19 18:15:01,051 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=75137af7a1516b89d98f767bed5a7853, UNASSIGN in 184 msec 2023-07-19 18:15:01,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:01,052 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=1e23ca253c36440481cd661f169af0a9, regionState=CLOSED 2023-07-19 18:15:01,052 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689790501052"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790501052"}]},"ts":"1689790501052"} 2023-07-19 18:15:01,053 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=146 2023-07-19 18:15:01,053 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49bd4495311e987e0e59ab3f393ad6ae, UNASSIGN in 190 msec 2023-07-19 18:15:01,053 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; CloseRegionProcedure a837b98ff08731d5d582a0ca25ffec8a, server=jenkins-hbase4.apache.org,43775,1689790473982 in 181 msec 2023-07-19 18:15:01,054 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a837b98ff08731d5d582a0ca25ffec8a, UNASSIGN in 192 msec 2023-07-19 18:15:01,055 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=145 2023-07-19 18:15:01,055 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=145, state=SUCCESS; CloseRegionProcedure 1e23ca253c36440481cd661f169af0a9, server=jenkins-hbase4.apache.org,40615,1689790473552 in 187 msec 2023-07-19 18:15:01,056 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=143 2023-07-19 18:15:01,056 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e23ca253c36440481cd661f169af0a9, UNASSIGN in 194 msec 2023-07-19 18:15:01,057 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790501057"}]},"ts":"1689790501057"} 2023-07-19 18:15:01,058 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-19 18:15:01,061 INFO [PEWorker-1] 
procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-19 18:15:01,064 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 209 msec 2023-07-19 18:15:01,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-19 18:15:01,160 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-19 18:15:01,160 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1704307594 2023-07-19 18:15:01,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1704307594 2023-07-19 18:15:01,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1704307594 2023-07-19 18:15:01,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:01,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:01,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:15:01,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-19 18:15:01,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1704307594, current retry=0 2023-07-19 18:15:01,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1704307594. 
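[Editor's note] The MoveTables request above changes only the group membership: because Group_testDisabledTableMove is disabled, the endpoint logs "Skipping move regions" and "Moving 0 region(s)". A sketch of the corresponding client call, assuming the RSGroupAdminClient from the hbase-rsgroup module on branch-2.4 (the target group name, including the 1704307594 suffix, is taken from the log and is assumed to have been created earlier in the test):

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        // Moves the (disabled) table's group membership; no regions are relocated.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testDisabledTableMove")),
              "Group_testDisabledTableMove_1704307594");
        }
      }
    }
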
2023-07-19 18:15:01,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:01,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:01,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:01,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-19 18:15:01,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:15:01,174 INFO [Listener at localhost/46039] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-19 18:15:01,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-19 18:15:01,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:01,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 921 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:51588 deadline: 1689790561175, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-19 18:15:01,176 DEBUG [Listener at localhost/46039] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
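[Editor's note] The TableNotEnabledException above is the expected outcome of issuing a second disable against an already-disabled table; as the HBaseTestingUtility entry shows, the utility then falls through to deletion ("already disabled, so just deleting it"). A hedged sketch of that check-then-delete pattern using the standard Admin API; not the utility's exact implementation:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DeleteDisabledTableSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testDisabledTableMove");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(tn)) {
            admin.disableTable(tn);   // calling this on a disabled table throws TableNotEnabledException
          }
          admin.deleteTable(tn);      // triggers the DeleteTableProcedure and HFile archiving seen below
        }
      }
    }
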
2023-07-19 18:15:01,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-19 18:15:01,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 18:15:01,179 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 18:15:01,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1704307594' 2023-07-19 18:15:01,179 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 18:15:01,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1704307594 2023-07-19 18:15:01,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:01,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:01,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:15:01,186 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:01,186 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:01,186 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:01,186 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:01,186 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:01,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-19 18:15:01,189 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853/f, FileablePath, 
hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853/recovered.edits] 2023-07-19 18:15:01,189 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae/recovered.edits] 2023-07-19 18:15:01,189 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4/recovered.edits] 2023-07-19 18:15:01,190 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a/recovered.edits] 2023-07-19 18:15:01,190 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9/f, FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9/recovered.edits] 2023-07-19 18:15:01,199 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853/recovered.edits/4.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853/recovered.edits/4.seqid 2023-07-19 18:15:01,199 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9/recovered.edits/4.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9/recovered.edits/4.seqid 2023-07-19 18:15:01,199 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a/recovered.edits/4.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a/recovered.edits/4.seqid 2023-07-19 18:15:01,200 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from 
FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4/recovered.edits/4.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4/recovered.edits/4.seqid 2023-07-19 18:15:01,200 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae/recovered.edits/4.seqid to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/archive/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae/recovered.edits/4.seqid 2023-07-19 18:15:01,200 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/75137af7a1516b89d98f767bed5a7853 2023-07-19 18:15:01,200 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/1e23ca253c36440481cd661f169af0a9 2023-07-19 18:15:01,200 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/a837b98ff08731d5d582a0ca25ffec8a 2023-07-19 18:15:01,201 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/547bae4eb02fe957994c9e66fd3c75c4 2023-07-19 18:15:01,201 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/.tmp/data/default/Group_testDisabledTableMove/49bd4495311e987e0e59ab3f393ad6ae 2023-07-19 18:15:01,201 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-19 18:15:01,203 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 18:15:01,205 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-19 18:15:01,211 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-19 18:15:01,212 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 18:15:01,212 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-19 18:15:01,212 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790501212"}]},"ts":"9223372036854775807"} 2023-07-19 18:15:01,212 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790501212"}]},"ts":"9223372036854775807"} 2023-07-19 18:15:01,212 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790501212"}]},"ts":"9223372036854775807"} 2023-07-19 18:15:01,212 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790501212"}]},"ts":"9223372036854775807"} 2023-07-19 18:15:01,212 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790501212"}]},"ts":"9223372036854775807"} 2023-07-19 18:15:01,214 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-19 18:15:01,214 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 49bd4495311e987e0e59ab3f393ad6ae, NAME => 'Group_testDisabledTableMove,,1689790500228.49bd4495311e987e0e59ab3f393ad6ae.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 1e23ca253c36440481cd661f169af0a9, NAME => 'Group_testDisabledTableMove,aaaaa,1689790500228.1e23ca253c36440481cd661f169af0a9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => a837b98ff08731d5d582a0ca25ffec8a, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689790500228.a837b98ff08731d5d582a0ca25ffec8a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 547bae4eb02fe957994c9e66fd3c75c4, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689790500228.547bae4eb02fe957994c9e66fd3c75c4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 75137af7a1516b89d98f767bed5a7853, NAME => 'Group_testDisabledTableMove,zzzzz,1689790500228.75137af7a1516b89d98f767bed5a7853.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-19 18:15:01,214 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-19 18:15:01,214 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689790501214"}]},"ts":"9223372036854775807"} 2023-07-19 18:15:01,216 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-19 18:15:01,217 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 18:15:01,218 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 41 msec 2023-07-19 18:15:01,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-19 18:15:01,290 INFO [Listener at localhost/46039] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-19 18:15:01,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:01,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:01,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:01,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 18:15:01,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:01,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:38251] to rsgroup default 2023-07-19 18:15:01,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1704307594 2023-07-19 18:15:01,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:01,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:01,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:15:01,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1704307594, current retry=0 2023-07-19 18:15:01,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38251,1689790473799, jenkins-hbase4.apache.org,38419,1689790478179] are moved back to Group_testDisabledTableMove_1704307594 2023-07-19 18:15:01,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1704307594 => default 2023-07-19 18:15:01,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:01,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1704307594 2023-07-19 18:15:01,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:01,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:01,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 18:15:01,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:01,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:01,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 18:15:01,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:01,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:01,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:01,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:01,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:01,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:01,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:01,314 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:01,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:01,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:01,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:01,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:01,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:01,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:01,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:01,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:15:01,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:01,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 955 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791701325, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:15:01,326 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:15:01,328 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:01,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:01,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:01,329 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:01,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:01,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:01,349 INFO [Listener at localhost/46039] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=515 (was 514) Potentially hanging thread: hconnection-0x22934466-shared-pool-27 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-741684861_17 at /127.0.0.1:34810 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xbd8ecb1-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1281934839_17 at /127.0.0.1:40942 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=795 (was 777) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=474 (was 474), ProcessCount=173 (was 173), AvailableMemoryMB=2604 (was 2622) 2023-07-19 18:15:01,349 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-19 18:15:01,371 INFO [Listener at localhost/46039] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=515, OpenFileDescriptor=795, MaxFileDescriptor=60000, SystemLoadAverage=474, ProcessCount=173, AvailableMemoryMB=2604 2023-07-19 18:15:01,371 WARN [Listener at localhost/46039] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-19 18:15:01,371 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-19 18:15:01,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:01,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:01,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:01,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 18:15:01,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:01,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:01,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:01,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:01,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:01,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:01,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:01,385 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:01,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:01,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 
18:15:01,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:01,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:01,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:01,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:01,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:01,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46739] to rsgroup master 2023-07-19 18:15:01,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:01,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] ipc.CallRunner(144): callId: 983 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51588 deadline: 1689791701398, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 2023-07-19 18:15:01,399 WARN [Listener at localhost/46039] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46739 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 18:15:01,401 INFO [Listener at localhost/46039] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:01,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:01,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:01,402 INFO [Listener at localhost/46039] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38251, jenkins-hbase4.apache.org:38419, jenkins-hbase4.apache.org:40615, jenkins-hbase4.apache.org:43775], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:01,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:01,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46739] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:01,403 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-19 18:15:01,403 INFO [Listener at localhost/46039] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-19 18:15:01,403 DEBUG [Listener at localhost/46039] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x65b017d0 to 127.0.0.1:61716 2023-07-19 18:15:01,403 DEBUG [Listener at localhost/46039] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:01,405 DEBUG [Listener at localhost/46039] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-19 18:15:01,405 DEBUG [Listener at localhost/46039] util.JVMClusterUtil(257): Found active master hash=1074255131, stopped=false 2023-07-19 18:15:01,405 DEBUG [Listener at localhost/46039] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 18:15:01,405 DEBUG [Listener at localhost/46039] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 18:15:01,406 INFO [Listener at localhost/46039] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46739,1689790471527 2023-07-19 18:15:01,407 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:01,407 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:01,407 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:01,407 DEBUG 
[Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:01,407 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:01,407 INFO [Listener at localhost/46039] procedure2.ProcedureExecutor(629): Stopping 2023-07-19 18:15:01,407 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:01,407 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:01,408 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:01,408 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:01,408 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:01,408 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:01,408 DEBUG [Listener at localhost/46039] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39ac2b40 to 127.0.0.1:61716 2023-07-19 18:15:01,408 DEBUG [Listener at localhost/46039] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:01,409 INFO [Listener at localhost/46039] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40615,1689790473552' ***** 2023-07-19 18:15:01,409 INFO [Listener at localhost/46039] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:01,409 INFO [Listener at localhost/46039] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38251,1689790473799' ***** 2023-07-19 18:15:01,409 INFO [Listener at localhost/46039] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:01,409 INFO [Listener at localhost/46039] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43775,1689790473982' ***** 2023-07-19 18:15:01,409 INFO [Listener at localhost/46039] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:01,409 INFO [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:01,409 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:01,409 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:01,409 INFO [Listener at localhost/46039] regionserver.HRegionServer(2297): ***** STOPPING region 
server 'jenkins-hbase4.apache.org,38419,1689790478179' ***** 2023-07-19 18:15:01,410 INFO [Listener at localhost/46039] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:01,410 INFO [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:01,425 INFO [RS:2;jenkins-hbase4:43775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@60f62ff2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:01,425 INFO [RS:0;jenkins-hbase4:40615] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@75640050{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:01,425 INFO [RS:1;jenkins-hbase4:38251] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7eecefb7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:01,425 INFO [RS:3;jenkins-hbase4:38419] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@289fa920{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:01,430 INFO [RS:0;jenkins-hbase4:40615] server.AbstractConnector(383): Stopped ServerConnector@4aa1e459{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:01,430 INFO [RS:2;jenkins-hbase4:43775] server.AbstractConnector(383): Stopped ServerConnector@447a00c4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:01,430 INFO [RS:0;jenkins-hbase4:40615] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:01,430 INFO [RS:2;jenkins-hbase4:43775] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:01,430 INFO [RS:1;jenkins-hbase4:38251] server.AbstractConnector(383): Stopped ServerConnector@7394d09{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:01,430 INFO [RS:3;jenkins-hbase4:38419] server.AbstractConnector(383): Stopped ServerConnector@35efd609{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:01,430 INFO [RS:1;jenkins-hbase4:38251] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:01,431 INFO [RS:0;jenkins-hbase4:40615] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@180451a2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:01,431 INFO [RS:2;jenkins-hbase4:43775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@22b75a27{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:01,431 INFO [RS:3;jenkins-hbase4:38419] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:01,432 INFO [RS:0;jenkins-hbase4:40615] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4b56872c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:01,432 INFO [RS:1;jenkins-hbase4:38251] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34301e2c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:01,434 INFO [RS:3;jenkins-hbase4:38419] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34f7812e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:01,433 INFO [RS:2;jenkins-hbase4:43775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b0e15fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:01,435 INFO [RS:3;jenkins-hbase4:38419] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@14ac5f55{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:01,434 INFO [RS:1;jenkins-hbase4:38251] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7e84c820{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:01,438 INFO [RS:3;jenkins-hbase4:38419] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:01,438 INFO [RS:3;jenkins-hbase4:38419] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 18:15:01,438 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 18:15:01,438 INFO [RS:3;jenkins-hbase4:38419] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 18:15:01,438 INFO [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:15:01,438 INFO [RS:0;jenkins-hbase4:40615] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:01,438 DEBUG [RS:3;jenkins-hbase4:38419] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x714d7cbd to 127.0.0.1:61716 2023-07-19 18:15:01,438 INFO [RS:2;jenkins-hbase4:43775] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:01,438 DEBUG [RS:3;jenkins-hbase4:38419] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:01,439 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 18:15:01,439 INFO [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38419,1689790478179; all regions closed. 2023-07-19 18:15:01,439 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 18:15:01,439 INFO [RS:0;jenkins-hbase4:40615] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-19 18:15:01,439 INFO [RS:0;jenkins-hbase4:40615] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 18:15:01,439 INFO [RS:2;jenkins-hbase4:43775] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 18:15:01,439 INFO [RS:2;jenkins-hbase4:43775] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 18:15:01,439 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(3305): Received CLOSE for 4804c008141c21a22bec55f72429fc21 2023-07-19 18:15:01,439 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(3305): Received CLOSE for d4a7e11c797a3cd910bfdb20bb1edade 2023-07-19 18:15:01,440 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:01,440 DEBUG [RS:0;jenkins-hbase4:40615] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x04d8e1ed to 127.0.0.1:61716 2023-07-19 18:15:01,440 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(3305): Received CLOSE for 9ea4dee563e7f0f7a6c584dc1c5c929d 2023-07-19 18:15:01,440 DEBUG [RS:0;jenkins-hbase4:40615] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:01,440 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 18:15:01,440 DEBUG [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1478): Online Regions={4804c008141c21a22bec55f72429fc21=testRename,,1689790494603.4804c008141c21a22bec55f72429fc21.} 2023-07-19 18:15:01,440 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(3305): Received CLOSE for d86f944363fe6bb7338c25a127959763 2023-07-19 18:15:01,440 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:15:01,440 DEBUG [RS:2;jenkins-hbase4:43775] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x28fdd382 to 127.0.0.1:61716 2023-07-19 18:15:01,440 DEBUG [RS:2;jenkins-hbase4:43775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:01,440 INFO [RS:2;jenkins-hbase4:43775] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 18:15:01,441 INFO [RS:2;jenkins-hbase4:43775] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:01,441 INFO [RS:2;jenkins-hbase4:43775] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 18:15:01,441 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-19 18:15:01,441 DEBUG [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1504): Waiting on 4804c008141c21a22bec55f72429fc21 2023-07-19 18:15:01,443 INFO [RS:1;jenkins-hbase4:38251] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:01,443 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-19 18:15:01,443 INFO [RS:1;jenkins-hbase4:38251] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 18:15:01,444 INFO [RS:1;jenkins-hbase4:38251] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-19 18:15:01,444 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 18:15:01,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4804c008141c21a22bec55f72429fc21, disabling compactions & flushes 2023-07-19 18:15:01,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d4a7e11c797a3cd910bfdb20bb1edade, disabling compactions & flushes 2023-07-19 18:15:01,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:15:01,444 INFO [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:15:01,444 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 18:15:01,443 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1478): Online Regions={d4a7e11c797a3cd910bfdb20bb1edade=unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade., 1588230740=hbase:meta,,1.1588230740, 9ea4dee563e7f0f7a6c584dc1c5c929d=hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d., d86f944363fe6bb7338c25a127959763=hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763.} 2023-07-19 18:15:01,444 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 18:15:01,444 DEBUG [RS:1;jenkins-hbase4:38251] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x48896ff7 to 127.0.0.1:61716 2023-07-19 18:15:01,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:15:01,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:15:01,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. after waiting 0 ms 2023-07-19 18:15:01,444 DEBUG [RS:1;jenkins-hbase4:38251] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:01,444 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1504): Waiting on 1588230740, 9ea4dee563e7f0f7a6c584dc1c5c929d, d4a7e11c797a3cd910bfdb20bb1edade, d86f944363fe6bb7338c25a127959763 2023-07-19 18:15:01,444 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 18:15:01,444 INFO [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38251,1689790473799; all regions closed. 2023-07-19 18:15:01,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:15:01,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 
2023-07-19 18:15:01,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. after waiting 0 ms 2023-07-19 18:15:01,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:15:01,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 18:15:01,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 18:15:01,445 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.48 KB heapSize=61.13 KB 2023-07-19 18:15:01,473 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/testRename/4804c008141c21a22bec55f72429fc21/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-19 18:15:01,475 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:15:01,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4804c008141c21a22bec55f72429fc21: 2023-07-19 18:15:01,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689790494603.4804c008141c21a22bec55f72429fc21. 2023-07-19 18:15:01,476 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:01,476 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:01,476 DEBUG [RS:3;jenkins-hbase4:38419] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs 2023-07-19 18:15:01,476 INFO [RS:3;jenkins-hbase4:38419] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38419%2C1689790478179:(num 1689790478619) 2023-07-19 18:15:01,476 DEBUG [RS:3;jenkins-hbase4:38419] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:01,477 INFO [RS:3;jenkins-hbase4:38419] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:01,478 INFO [RS:3;jenkins-hbase4:38419] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:01,479 DEBUG [RS:1;jenkins-hbase4:38251] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs 2023-07-19 18:15:01,479 INFO [RS:1;jenkins-hbase4:38251] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38251%2C1689790473799:(num 1689790476311) 2023-07-19 18:15:01,479 INFO [RS:3;jenkins-hbase4:38419] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 18:15:01,479 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-19 18:15:01,479 DEBUG [RS:1;jenkins-hbase4:38251] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:01,479 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:01,479 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:01,479 INFO [RS:3;jenkins-hbase4:38419] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:01,479 INFO [RS:1;jenkins-hbase4:38251] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:01,480 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/default/unmovedTable/d4a7e11c797a3cd910bfdb20bb1edade/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-19 18:15:01,480 INFO [RS:1;jenkins-hbase4:38251] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:01,479 INFO [RS:3;jenkins-hbase4:38419] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 18:15:01,481 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 18:15:01,481 INFO [RS:1;jenkins-hbase4:38251] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 18:15:01,483 INFO [RS:1;jenkins-hbase4:38251] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:01,483 INFO [RS:1;jenkins-hbase4:38251] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 18:15:01,483 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:15:01,483 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d4a7e11c797a3cd910bfdb20bb1edade: 2023-07-19 18:15:01,482 INFO [RS:3;jenkins-hbase4:38419] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38419 2023-07-19 18:15:01,483 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689790496259.d4a7e11c797a3cd910bfdb20bb1edade. 2023-07-19 18:15:01,484 INFO [RS:1;jenkins-hbase4:38251] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38251 2023-07-19 18:15:01,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9ea4dee563e7f0f7a6c584dc1c5c929d, disabling compactions & flushes 2023-07-19 18:15:01,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:15:01,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:15:01,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 
after waiting 0 ms 2023-07-19 18:15:01,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:15:01,495 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.56 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/info/2466fcc2c66144509d5b7ba41542ad14 2023-07-19 18:15:01,501 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:15:01,501 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:01,501 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:01,502 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:15:01,503 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38419,1689790478179] 2023-07-19 18:15:01,503 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38419,1689790478179; numProcessing=1 2023-07-19 18:15:01,503 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:15:01,504 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:01,502 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38419,1689790478179 2023-07-19 18:15:01,503 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:01,504 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:01,505 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): 
regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:15:01,505 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:01,505 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38419,1689790478179 already deleted, retry=false 2023-07-19 18:15:01,505 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38419,1689790478179 expired; onlineServers=3 2023-07-19 18:15:01,505 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:15:01,505 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38251,1689790473799 2023-07-19 18:15:01,506 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38251,1689790473799] 2023-07-19 18:15:01,506 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38251,1689790473799; numProcessing=2 2023-07-19 18:15:01,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/namespace/9ea4dee563e7f0f7a6c584dc1c5c929d/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-19 18:15:01,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:15:01,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9ea4dee563e7f0f7a6c584dc1c5c929d: 2023-07-19 18:15:01,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689790476992.9ea4dee563e7f0f7a6c584dc1c5c929d. 2023-07-19 18:15:01,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d86f944363fe6bb7338c25a127959763, disabling compactions & flushes 2023-07-19 18:15:01,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:15:01,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:15:01,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 
after waiting 0 ms 2023-07-19 18:15:01,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:15:01,509 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38251,1689790473799 already deleted, retry=false 2023-07-19 18:15:01,509 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38251,1689790473799 expired; onlineServers=2 2023-07-19 18:15:01,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d86f944363fe6bb7338c25a127959763 1/1 column families, dataSize=27.12 KB heapSize=44.66 KB 2023-07-19 18:15:01,510 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2466fcc2c66144509d5b7ba41542ad14 2023-07-19 18:15:01,556 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.12 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/.tmp/m/19f444f40e1841a7b62d7351b35bf373 2023-07-19 18:15:01,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 19f444f40e1841a7b62d7351b35bf373 2023-07-19 18:15:01,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/.tmp/m/19f444f40e1841a7b62d7351b35bf373 as hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/m/19f444f40e1841a7b62d7351b35bf373 2023-07-19 18:15:01,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 19f444f40e1841a7b62d7351b35bf373 2023-07-19 18:15:01,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/m/19f444f40e1841a7b62d7351b35bf373, entries=28, sequenceid=101, filesize=6.1 K 2023-07-19 18:15:01,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.12 KB/27771, heapSize ~44.64 KB/45712, currentSize=0 B/0 for d86f944363fe6bb7338c25a127959763 in 68ms, sequenceid=101, compaction requested=false 2023-07-19 18:15:01,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/rsgroup/d86f944363fe6bb7338c25a127959763/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-19 18:15:01,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 18:15:01,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 
2023-07-19 18:15:01,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d86f944363fe6bb7338c25a127959763: 2023-07-19 18:15:01,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689790476891.d86f944363fe6bb7338c25a127959763. 2023-07-19 18:15:01,607 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:01,608 INFO [RS:3;jenkins-hbase4:38419] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38419,1689790478179; zookeeper connection closed. 2023-07-19 18:15:01,608 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38419-0x1017ecade2e000b, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:01,608 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7f0784ec] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7f0784ec 2023-07-19 18:15:01,609 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:01,609 INFO [RS:1;jenkins-hbase4:38251] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38251,1689790473799; zookeeper connection closed. 2023-07-19 18:15:01,609 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:38251-0x1017ecade2e0002, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:01,610 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@38842f17] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@38842f17 2023-07-19 18:15:01,641 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40615,1689790473552; all regions closed. 
2023-07-19 18:15:01,645 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-19 18:15:01,648 DEBUG [RS:0;jenkins-hbase4:40615] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs 2023-07-19 18:15:01,648 INFO [RS:0;jenkins-hbase4:40615] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40615%2C1689790473552.meta:.meta(num 1689790476632) 2023-07-19 18:15:01,655 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/WALs/jenkins-hbase4.apache.org,40615,1689790473552/jenkins-hbase4.apache.org%2C40615%2C1689790473552.1689790476311 not finished, retry = 0 2023-07-19 18:15:01,747 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 18:15:01,747 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 18:15:01,758 DEBUG [RS:0;jenkins-hbase4:40615] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs 2023-07-19 18:15:01,758 INFO [RS:0;jenkins-hbase4:40615] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40615%2C1689790473552:(num 1689790476311) 2023-07-19 18:15:01,758 DEBUG [RS:0;jenkins-hbase4:40615] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:01,758 INFO [RS:0;jenkins-hbase4:40615] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:01,759 INFO [RS:0;jenkins-hbase4:40615] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:01,759 INFO [RS:0;jenkins-hbase4:40615] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 18:15:01,759 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 18:15:01,759 INFO [RS:0;jenkins-hbase4:40615] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:01,759 INFO [RS:0;jenkins-hbase4:40615] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-19 18:15:01,760 INFO [RS:0;jenkins-hbase4:40615] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40615 2023-07-19 18:15:01,762 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:01,762 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:01,762 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40615,1689790473552 2023-07-19 18:15:01,763 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40615,1689790473552] 2023-07-19 18:15:01,763 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40615,1689790473552; numProcessing=3 2023-07-19 18:15:01,765 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40615,1689790473552 already deleted, retry=false 2023-07-19 18:15:01,765 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40615,1689790473552 expired; onlineServers=1 2023-07-19 18:15:01,845 DEBUG [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-19 18:15:01,947 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/rep_barrier/3db765ff4474447b96e37cc6dbc90492 2023-07-19 18:15:01,953 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3db765ff4474447b96e37cc6dbc90492 2023-07-19 18:15:01,968 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/table/42fd451ea0bf48c28dce78f7c75f4e0e 2023-07-19 18:15:01,976 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 42fd451ea0bf48c28dce78f7c75f4e0e 2023-07-19 18:15:01,977 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/info/2466fcc2c66144509d5b7ba41542ad14 as hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/info/2466fcc2c66144509d5b7ba41542ad14 2023-07-19 18:15:01,985 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2466fcc2c66144509d5b7ba41542ad14 
2023-07-19 18:15:01,985 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/info/2466fcc2c66144509d5b7ba41542ad14, entries=62, sequenceid=210, filesize=11.9 K 2023-07-19 18:15:01,986 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/rep_barrier/3db765ff4474447b96e37cc6dbc90492 as hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/rep_barrier/3db765ff4474447b96e37cc6dbc90492 2023-07-19 18:15:02,000 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3db765ff4474447b96e37cc6dbc90492 2023-07-19 18:15:02,000 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/rep_barrier/3db765ff4474447b96e37cc6dbc90492, entries=8, sequenceid=210, filesize=5.8 K 2023-07-19 18:15:02,001 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/.tmp/table/42fd451ea0bf48c28dce78f7c75f4e0e as hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/table/42fd451ea0bf48c28dce78f7c75f4e0e 2023-07-19 18:15:02,008 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 42fd451ea0bf48c28dce78f7c75f4e0e 2023-07-19 18:15:02,008 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/table/42fd451ea0bf48c28dce78f7c75f4e0e, entries=16, sequenceid=210, filesize=6.0 K 2023-07-19 18:15:02,010 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.48 KB/38382, heapSize ~61.08 KB/62544, currentSize=0 B/0 for 1588230740 in 565ms, sequenceid=210, compaction requested=false 2023-07-19 18:15:02,010 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-19 18:15:02,027 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=98 2023-07-19 18:15:02,028 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 18:15:02,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 18:15:02,029 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 18:15:02,029 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-19 
18:15:02,045 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43775,1689790473982; all regions closed. 2023-07-19 18:15:02,054 DEBUG [RS:2;jenkins-hbase4:43775] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs 2023-07-19 18:15:02,054 INFO [RS:2;jenkins-hbase4:43775] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43775%2C1689790473982.meta:.meta(num 1689790486075) 2023-07-19 18:15:02,064 DEBUG [RS:2;jenkins-hbase4:43775] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/oldWALs 2023-07-19 18:15:02,064 INFO [RS:2;jenkins-hbase4:43775] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43775%2C1689790473982:(num 1689790476311) 2023-07-19 18:15:02,064 DEBUG [RS:2;jenkins-hbase4:43775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:02,064 INFO [RS:2;jenkins-hbase4:43775] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:02,065 INFO [RS:2;jenkins-hbase4:43775] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:02,065 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 18:15:02,066 INFO [RS:2;jenkins-hbase4:43775] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43775 2023-07-19 18:15:02,069 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:02,069 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43775,1689790473982 2023-07-19 18:15:02,070 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43775,1689790473982] 2023-07-19 18:15:02,071 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43775,1689790473982; numProcessing=4 2023-07-19 18:15:02,072 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43775,1689790473982 already deleted, retry=false 2023-07-19 18:15:02,072 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43775,1689790473982 expired; onlineServers=0 2023-07-19 18:15:02,072 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46739,1689790471527' ***** 2023-07-19 18:15:02,072 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-19 18:15:02,073 DEBUG [M:0;jenkins-hbase4:46739] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13adf20, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:15:02,073 INFO [M:0;jenkins-hbase4:46739] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:02,075 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-19 18:15:02,075 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:02,076 INFO [M:0;jenkins-hbase4:46739] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@43d80c2b{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 18:15:02,076 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:15:02,076 INFO [M:0;jenkins-hbase4:46739] server.AbstractConnector(383): Stopped ServerConnector@5038041a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:02,076 INFO [M:0;jenkins-hbase4:46739] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:02,077 INFO [M:0;jenkins-hbase4:46739] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7522554c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:02,077 INFO [M:0;jenkins-hbase4:46739] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@51a5baa3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:02,077 INFO [M:0;jenkins-hbase4:46739] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46739,1689790471527 2023-07-19 18:15:02,078 INFO [M:0;jenkins-hbase4:46739] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46739,1689790471527; all regions closed. 2023-07-19 18:15:02,078 DEBUG [M:0;jenkins-hbase4:46739] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:02,078 INFO [M:0;jenkins-hbase4:46739] master.HMaster(1491): Stopping master jetty server 2023-07-19 18:15:02,078 INFO [M:0;jenkins-hbase4:46739] server.AbstractConnector(383): Stopped ServerConnector@17b5595c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:02,079 DEBUG [M:0;jenkins-hbase4:46739] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-19 18:15:02,079 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-19 18:15:02,079 DEBUG [M:0;jenkins-hbase4:46739] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-19 18:15:02,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689790475822] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689790475822,5,FailOnTimeoutGroup] 2023-07-19 18:15:02,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689790475823] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689790475823,5,FailOnTimeoutGroup] 2023-07-19 18:15:02,079 INFO [M:0;jenkins-hbase4:46739] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-19 18:15:02,079 INFO [M:0;jenkins-hbase4:46739] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-19 18:15:02,079 INFO [M:0;jenkins-hbase4:46739] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-19 18:15:02,079 DEBUG [M:0;jenkins-hbase4:46739] master.HMaster(1512): Stopping service threads 2023-07-19 18:15:02,079 INFO [M:0;jenkins-hbase4:46739] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-19 18:15:02,080 ERROR [M:0;jenkins-hbase4:46739] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-19 18:15:02,080 INFO [M:0;jenkins-hbase4:46739] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-19 18:15:02,081 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-19 18:15:02,081 DEBUG [M:0;jenkins-hbase4:46739] zookeeper.ZKUtil(398): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-19 18:15:02,081 WARN [M:0;jenkins-hbase4:46739] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-19 18:15:02,081 INFO [M:0;jenkins-hbase4:46739] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-19 18:15:02,081 INFO [M:0;jenkins-hbase4:46739] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-19 18:15:02,082 DEBUG [M:0;jenkins-hbase4:46739] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 18:15:02,082 INFO [M:0;jenkins-hbase4:46739] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:02,082 DEBUG [M:0;jenkins-hbase4:46739] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-19 18:15:02,082 DEBUG [M:0;jenkins-hbase4:46739] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 18:15:02,082 DEBUG [M:0;jenkins-hbase4:46739] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:02,082 INFO [M:0;jenkins-hbase4:46739] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.00 KB heapSize=621.09 KB 2023-07-19 18:15:02,100 INFO [M:0;jenkins-hbase4:46739] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.00 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/be5d0bf921a34488b5051acecec57386 2023-07-19 18:15:02,107 DEBUG [M:0;jenkins-hbase4:46739] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/be5d0bf921a34488b5051acecec57386 as hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/be5d0bf921a34488b5051acecec57386 2023-07-19 18:15:02,115 INFO [M:0;jenkins-hbase4:46739] regionserver.HStore(1080): Added hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/be5d0bf921a34488b5051acecec57386, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-19 18:15:02,116 INFO [M:0;jenkins-hbase4:46739] regionserver.HRegion(2948): Finished flush of dataSize ~519.00 KB/531458, heapSize ~621.07 KB/635976, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 34ms, sequenceid=1152, compaction requested=false 2023-07-19 18:15:02,118 INFO [M:0;jenkins-hbase4:46739] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:02,118 DEBUG [M:0;jenkins-hbase4:46739] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 18:15:02,123 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/MasterData/WALs/jenkins-hbase4.apache.org,46739,1689790471527/jenkins-hbase4.apache.org%2C46739%2C1689790471527.1689790474775 not finished, retry = 0 2023-07-19 18:15:02,209 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:02,209 INFO [RS:2;jenkins-hbase4:43775] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43775,1689790473982; zookeeper connection closed. 2023-07-19 18:15:02,209 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:43775-0x1017ecade2e0003, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:02,215 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@58dcdd7b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@58dcdd7b 2023-07-19 18:15:02,225 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-19 18:15:02,225 INFO [M:0;jenkins-hbase4:46739] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-19 18:15:02,226 INFO [M:0;jenkins-hbase4:46739] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46739 2023-07-19 18:15:02,229 DEBUG [M:0;jenkins-hbase4:46739] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46739,1689790471527 already deleted, retry=false 2023-07-19 18:15:02,309 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:02,309 INFO [RS:0;jenkins-hbase4:40615] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40615,1689790473552; zookeeper connection closed. 2023-07-19 18:15:02,309 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): regionserver:40615-0x1017ecade2e0001, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:02,310 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1a46688e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1a46688e 2023-07-19 18:15:02,310 INFO [Listener at localhost/46039] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-19 18:15:02,410 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:02,410 INFO [M:0;jenkins-hbase4:46739] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46739,1689790471527; zookeeper connection closed. 
2023-07-19 18:15:02,410 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): master:46739-0x1017ecade2e0000, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:02,412 WARN [Listener at localhost/46039] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 18:15:02,416 INFO [Listener at localhost/46039] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 18:15:02,518 WARN [BP-1139031693-172.31.14.131-1689790468337 heartbeating to localhost/127.0.0.1:41243] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 18:15:02,519 WARN [BP-1139031693-172.31.14.131-1689790468337 heartbeating to localhost/127.0.0.1:41243] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1139031693-172.31.14.131-1689790468337 (Datanode Uuid f62b0187-6db6-43d7-a389-69e2232367be) service to localhost/127.0.0.1:41243 2023-07-19 18:15:02,520 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/dfs/data/data5/current/BP-1139031693-172.31.14.131-1689790468337] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:02,521 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/dfs/data/data6/current/BP-1139031693-172.31.14.131-1689790468337] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:02,522 WARN [Listener at localhost/46039] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 18:15:02,525 INFO [Listener at localhost/46039] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 18:15:02,628 WARN [BP-1139031693-172.31.14.131-1689790468337 heartbeating to localhost/127.0.0.1:41243] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 18:15:02,629 WARN [BP-1139031693-172.31.14.131-1689790468337 heartbeating to localhost/127.0.0.1:41243] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1139031693-172.31.14.131-1689790468337 (Datanode Uuid 462ffe16-dd7d-403c-a132-8d98d1e9d939) service to localhost/127.0.0.1:41243 2023-07-19 18:15:02,629 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/dfs/data/data3/current/BP-1139031693-172.31.14.131-1689790468337] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:02,630 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/dfs/data/data4/current/BP-1139031693-172.31.14.131-1689790468337] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:02,631 WARN [Listener at localhost/46039] datanode.DirectoryScanner(534): 
DirectoryScanner: shutdown has been called 2023-07-19 18:15:02,634 INFO [Listener at localhost/46039] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 18:15:02,737 WARN [BP-1139031693-172.31.14.131-1689790468337 heartbeating to localhost/127.0.0.1:41243] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 18:15:02,737 WARN [BP-1139031693-172.31.14.131-1689790468337 heartbeating to localhost/127.0.0.1:41243] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1139031693-172.31.14.131-1689790468337 (Datanode Uuid 3f572fc8-b3d9-423f-88eb-2c4098dce574) service to localhost/127.0.0.1:41243 2023-07-19 18:15:02,737 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/dfs/data/data1/current/BP-1139031693-172.31.14.131-1689790468337] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:02,738 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/cluster_e8519d95-cc0e-c614-d38c-33ea358da3b8/dfs/data/data2/current/BP-1139031693-172.31.14.131-1689790468337] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:02,769 INFO [Listener at localhost/46039] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 18:15:02,888 INFO [Listener at localhost/46039] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-19 18:15:02,937 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-19 18:15:02,937 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-19 18:15:02,937 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.log.dir so I do NOT create it in target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a 2023-07-19 18:15:02,938 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/89202250-bd9b-68c2-7d09-ac839b1c24bb/hadoop.tmp.dir so I do NOT create it in target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a 2023-07-19 18:15:02,938 INFO [Listener at localhost/46039] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/cluster_7389c4b2-efdf-2cf4-3ef3-9acc238f7553, deleteOnExit=true 2023-07-19 18:15:02,938 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-19 18:15:02,938 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/test.cache.data in system properties and HBase conf 2023-07-19 18:15:02,938 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.tmp.dir in system properties and HBase conf 2023-07-19 18:15:02,938 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.log.dir in system properties and HBase conf 2023-07-19 18:15:02,939 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-19 18:15:02,939 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-19 18:15:02,939 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-19 18:15:02,939 DEBUG [Listener at localhost/46039] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-19 18:15:02,939 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-19 18:15:02,939 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-19 18:15:02,940 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-19 18:15:02,940 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 18:15:02,940 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-19 18:15:02,940 INFO [Listener at localhost/46039] 
hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-19 18:15:02,940 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 18:15:02,940 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 18:15:02,940 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-19 18:15:02,940 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/nfs.dump.dir in system properties and HBase conf 2023-07-19 18:15:02,940 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/java.io.tmpdir in system properties and HBase conf 2023-07-19 18:15:02,941 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 18:15:02,941 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-19 18:15:02,941 INFO [Listener at localhost/46039] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-19 18:15:02,945 WARN [Listener at localhost/46039] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 18:15:02,945 WARN [Listener at localhost/46039] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 18:15:02,985 DEBUG [Listener at localhost/46039-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1017ecade2e000a, quorum=127.0.0.1:61716, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 
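The entries above show the test tearing down its first mini cluster and immediately laying out directories and system properties for a second one under test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a, again with StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}. As a rough sketch only (assuming the standard HBaseTestingUtility/StartMiniClusterOption test APIs from the hbase-server test jar; this is not code taken from this run), such a cluster is typically brought up and torn down like this:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Mirrors the option string printed in the log:
        // numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        // Starts DFS, a mini ZK ensemble and HBase, producing startup entries like those above.
        util.startMiniCluster(option);
        try {
          // ... test body would run against util.getConnection() / util.getAdmin() ...
        } finally {
          // Produces the "Minicluster is down" entry seen earlier in the log.
          util.shutdownMiniCluster();
        }
      }
    }
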
2023-07-19 18:15:02,985 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1017ecade2e000a, quorum=127.0.0.1:61716, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-19 18:15:02,989 WARN [Listener at localhost/46039] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:15:02,992 INFO [Listener at localhost/46039] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:15:02,997 INFO [Listener at localhost/46039] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/java.io.tmpdir/Jetty_localhost_34867_hdfs____hn2zs2/webapp 2023-07-19 18:15:03,099 INFO [Listener at localhost/46039] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34867 2023-07-19 18:15:03,103 WARN [Listener at localhost/46039] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 18:15:03,103 WARN [Listener at localhost/46039] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 18:15:03,142 WARN [Listener at localhost/39265] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:15:03,160 WARN [Listener at localhost/39265] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-19 18:15:03,221 WARN [Listener at localhost/39265] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 18:15:03,223 WARN [Listener at localhost/39265] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:15:03,224 INFO [Listener at localhost/39265] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:15:03,229 INFO [Listener at localhost/39265] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/java.io.tmpdir/Jetty_localhost_36697_datanode____.jiv9xk/webapp 2023-07-19 18:15:03,327 INFO [Listener at localhost/39265] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36697 2023-07-19 18:15:03,334 WARN [Listener at localhost/43709] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:15:03,352 WARN [Listener at localhost/43709] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 18:15:03,354 WARN [Listener at localhost/43709] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:15:03,355 INFO [Listener at localhost/43709] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:15:03,359 INFO [Listener at localhost/43709] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/java.io.tmpdir/Jetty_localhost_44381_datanode____.hkskkt/webapp 2023-07-19 18:15:03,446 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x494ab75e0ebd9333: Processing first storage report for DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8 from datanode 7d00d50d-525b-4344-964b-46435a7e595d 2023-07-19 18:15:03,446 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x494ab75e0ebd9333: from storage DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8 node DatanodeRegistration(127.0.0.1:36427, datanodeUuid=7d00d50d-525b-4344-964b-46435a7e595d, infoPort=43195, infoSecurePort=0, ipcPort=43709, storageInfo=lv=-57;cid=testClusterID;nsid=1318643282;c=1689790502948), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-19 18:15:03,446 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x494ab75e0ebd9333: Processing first storage report for DS-73bf1c6d-4071-4eda-9db9-66024507f26f from datanode 7d00d50d-525b-4344-964b-46435a7e595d 2023-07-19 18:15:03,446 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x494ab75e0ebd9333: from storage DS-73bf1c6d-4071-4eda-9db9-66024507f26f node DatanodeRegistration(127.0.0.1:36427, datanodeUuid=7d00d50d-525b-4344-964b-46435a7e595d, infoPort=43195, infoSecurePort=0, ipcPort=43709, storageInfo=lv=-57;cid=testClusterID;nsid=1318643282;c=1689790502948), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:15:03,479 INFO [Listener at localhost/43709] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44381 2023-07-19 18:15:03,486 WARN [Listener at localhost/34385] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:15:03,503 WARN [Listener at localhost/34385] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 18:15:03,506 WARN [Listener at localhost/34385] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:15:03,508 INFO [Listener at localhost/34385] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:15:03,513 INFO [Listener at localhost/34385] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/java.io.tmpdir/Jetty_localhost_43757_datanode____fj5x5t/webapp 2023-07-19 18:15:03,596 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x871db40e31769993: Processing first storage report for DS-c578ab95-2dac-4564-b255-cb2ec84837a1 from datanode 6f486418-2bef-444c-98ea-c19926f1d7ff 2023-07-19 18:15:03,596 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x871db40e31769993: from storage DS-c578ab95-2dac-4564-b255-cb2ec84837a1 node DatanodeRegistration(127.0.0.1:36363, datanodeUuid=6f486418-2bef-444c-98ea-c19926f1d7ff, infoPort=36543, infoSecurePort=0, ipcPort=34385, storageInfo=lv=-57;cid=testClusterID;nsid=1318643282;c=1689790502948), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, 
invalidatedBlocks: 0 2023-07-19 18:15:03,597 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x871db40e31769993: Processing first storage report for DS-1c32be56-0555-4ab4-a69a-f24f79bb905d from datanode 6f486418-2bef-444c-98ea-c19926f1d7ff 2023-07-19 18:15:03,597 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x871db40e31769993: from storage DS-1c32be56-0555-4ab4-a69a-f24f79bb905d node DatanodeRegistration(127.0.0.1:36363, datanodeUuid=6f486418-2bef-444c-98ea-c19926f1d7ff, infoPort=36543, infoSecurePort=0, ipcPort=34385, storageInfo=lv=-57;cid=testClusterID;nsid=1318643282;c=1689790502948), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:15:03,621 INFO [Listener at localhost/34385] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43757 2023-07-19 18:15:03,629 WARN [Listener at localhost/37435] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:15:03,734 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xac440b3fb575868a: Processing first storage report for DS-c01882ed-5342-41e5-9b0b-4e48e32377d9 from datanode aa8454ef-ff0b-48ae-8802-2a1d3aab3d43 2023-07-19 18:15:03,734 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xac440b3fb575868a: from storage DS-c01882ed-5342-41e5-9b0b-4e48e32377d9 node DatanodeRegistration(127.0.0.1:45971, datanodeUuid=aa8454ef-ff0b-48ae-8802-2a1d3aab3d43, infoPort=35545, infoSecurePort=0, ipcPort=37435, storageInfo=lv=-57;cid=testClusterID;nsid=1318643282;c=1689790502948), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:15:03,735 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xac440b3fb575868a: Processing first storage report for DS-70be5295-9e78-4618-a464-d55a56124503 from datanode aa8454ef-ff0b-48ae-8802-2a1d3aab3d43 2023-07-19 18:15:03,735 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xac440b3fb575868a: from storage DS-70be5295-9e78-4618-a464-d55a56124503 node DatanodeRegistration(127.0.0.1:45971, datanodeUuid=aa8454ef-ff0b-48ae-8802-2a1d3aab3d43, infoPort=35545, infoSecurePort=0, ipcPort=37435, storageInfo=lv=-57;cid=testClusterID;nsid=1318643282;c=1689790502948), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:15:03,739 DEBUG [Listener at localhost/37435] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a 2023-07-19 18:15:03,742 INFO [Listener at localhost/37435] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/cluster_7389c4b2-efdf-2cf4-3ef3-9acc238f7553/zookeeper_0, clientPort=55505, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/cluster_7389c4b2-efdf-2cf4-3ef3-9acc238f7553/zookeeper_0/version-2, dataDirSize=424 
dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/cluster_7389c4b2-efdf-2cf4-3ef3-9acc238f7553/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-19 18:15:03,743 INFO [Listener at localhost/37435] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55505 2023-07-19 18:15:03,743 INFO [Listener at localhost/37435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:03,744 INFO [Listener at localhost/37435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:03,763 INFO [Listener at localhost/37435] util.FSUtils(471): Created version file at hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6 with version=8 2023-07-19 18:15:03,763 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/hbase-staging 2023-07-19 18:15:03,764 DEBUG [Listener at localhost/37435] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-19 18:15:03,764 DEBUG [Listener at localhost/37435] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-19 18:15:03,764 DEBUG [Listener at localhost/37435] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-19 18:15:03,764 DEBUG [Listener at localhost/37435] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
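The second cluster's MiniZooKeeperCluster is reported on client port 55505 before the master and region-server ports are randomized. A minimal, hypothetical sketch of how a client-side configuration would be pointed at such an ensemble (standard HBase client API; the quorum host and port below are simply the values echoed in the log, and the class name is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniZkClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // The mini ZK ensemble from the log: quorum 127.0.0.1, clientPort=55505
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 55505);
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          // Simple round-trip to confirm the client reached the (mini) cluster.
          System.out.println("cluster id: " + admin.getClusterMetrics().getClusterId());
        }
      }
    }
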
2023-07-19 18:15:03,765 INFO [Listener at localhost/37435] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:15:03,766 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:03,766 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:03,766 INFO [Listener at localhost/37435] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:15:03,766 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:03,766 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:15:03,766 INFO [Listener at localhost/37435] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:15:03,767 INFO [Listener at localhost/37435] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33827 2023-07-19 18:15:03,767 INFO [Listener at localhost/37435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:03,768 INFO [Listener at localhost/37435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:03,769 INFO [Listener at localhost/37435] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33827 connecting to ZooKeeper ensemble=127.0.0.1:55505 2023-07-19 18:15:03,777 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:338270x0, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:15:03,777 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33827-0x1017ecb5e400000 connected 2023-07-19 18:15:03,791 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:15:03,792 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:03,792 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:15:03,792 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33827 2023-07-19 18:15:03,793 DEBUG [Listener at localhost/37435] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33827 2023-07-19 18:15:03,793 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33827 2023-07-19 18:15:03,793 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33827 2023-07-19 18:15:03,793 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33827 2023-07-19 18:15:03,796 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:15:03,796 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:15:03,796 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:15:03,796 INFO [Listener at localhost/37435] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-19 18:15:03,796 INFO [Listener at localhost/37435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:15:03,797 INFO [Listener at localhost/37435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:15:03,797 INFO [Listener at localhost/37435] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-19 18:15:03,797 INFO [Listener at localhost/37435] http.HttpServer(1146): Jetty bound to port 45437 2023-07-19 18:15:03,797 INFO [Listener at localhost/37435] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:03,802 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:03,802 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e960f4c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:15:03,803 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:03,803 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f546421{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:15:03,922 INFO [Listener at localhost/37435] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:15:03,923 INFO [Listener at localhost/37435] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:15:03,924 INFO [Listener at localhost/37435] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:15:03,924 INFO [Listener at localhost/37435] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 18:15:03,925 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:03,926 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@38a9d218{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/java.io.tmpdir/jetty-0_0_0_0-45437-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2475355116906305806/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 18:15:03,927 INFO [Listener at localhost/37435] server.AbstractConnector(333): Started ServerConnector@78e67007{HTTP/1.1, (http/1.1)}{0.0.0.0:45437} 2023-07-19 18:15:03,927 INFO [Listener at localhost/37435] server.Server(415): Started @37635ms 2023-07-19 18:15:03,927 INFO [Listener at localhost/37435] master.HMaster(444): hbase.rootdir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6, hbase.cluster.distributed=false 2023-07-19 18:15:03,944 INFO [Listener at localhost/37435] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:15:03,944 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:03,944 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:03,945 
INFO [Listener at localhost/37435] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:15:03,945 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:03,945 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:15:03,945 INFO [Listener at localhost/37435] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:15:03,946 INFO [Listener at localhost/37435] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45277 2023-07-19 18:15:03,946 INFO [Listener at localhost/37435] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:15:03,947 DEBUG [Listener at localhost/37435] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:15:03,947 INFO [Listener at localhost/37435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:03,949 INFO [Listener at localhost/37435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:03,950 INFO [Listener at localhost/37435] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45277 connecting to ZooKeeper ensemble=127.0.0.1:55505 2023-07-19 18:15:03,953 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:452770x0, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:15:03,954 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45277-0x1017ecb5e400001 connected 2023-07-19 18:15:03,955 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:15:03,955 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:03,956 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:15:03,957 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45277 2023-07-19 18:15:03,958 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45277 2023-07-19 18:15:03,959 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45277 2023-07-19 18:15:03,974 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45277 2023-07-19 18:15:03,975 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45277 2023-07-19 18:15:03,977 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:15:03,977 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:15:03,977 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:15:03,978 INFO [Listener at localhost/37435] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:15:03,978 INFO [Listener at localhost/37435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:15:03,978 INFO [Listener at localhost/37435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:15:03,978 INFO [Listener at localhost/37435] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 18:15:03,979 INFO [Listener at localhost/37435] http.HttpServer(1146): Jetty bound to port 41991 2023-07-19 18:15:03,979 INFO [Listener at localhost/37435] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:03,980 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:03,981 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ea82fff{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:15:03,981 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:03,981 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d8a119b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:15:04,097 INFO [Listener at localhost/37435] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:15:04,098 INFO [Listener at localhost/37435] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:15:04,098 INFO [Listener at localhost/37435] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:15:04,098 INFO [Listener at localhost/37435] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 18:15:04,099 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:04,100 INFO 
[Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@27061e18{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/java.io.tmpdir/jetty-0_0_0_0-41991-hbase-server-2_4_18-SNAPSHOT_jar-_-any-326477908371130614/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:04,101 INFO [Listener at localhost/37435] server.AbstractConnector(333): Started ServerConnector@619d9d61{HTTP/1.1, (http/1.1)}{0.0.0.0:41991} 2023-07-19 18:15:04,101 INFO [Listener at localhost/37435] server.Server(415): Started @37810ms 2023-07-19 18:15:04,114 INFO [Listener at localhost/37435] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:15:04,114 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:04,114 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:04,114 INFO [Listener at localhost/37435] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:15:04,114 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:04,114 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:15:04,114 INFO [Listener at localhost/37435] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:15:04,115 INFO [Listener at localhost/37435] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38039 2023-07-19 18:15:04,116 INFO [Listener at localhost/37435] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:15:04,117 DEBUG [Listener at localhost/37435] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:15:04,118 INFO [Listener at localhost/37435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:04,119 INFO [Listener at localhost/37435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:04,120 INFO [Listener at localhost/37435] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38039 connecting to ZooKeeper ensemble=127.0.0.1:55505 2023-07-19 18:15:04,124 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:380390x0, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 
18:15:04,125 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38039-0x1017ecb5e400002 connected 2023-07-19 18:15:04,126 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:15:04,126 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:04,127 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:15:04,129 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38039 2023-07-19 18:15:04,129 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38039 2023-07-19 18:15:04,129 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38039 2023-07-19 18:15:04,129 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38039 2023-07-19 18:15:04,130 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38039 2023-07-19 18:15:04,132 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:15:04,132 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:15:04,132 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:15:04,132 INFO [Listener at localhost/37435] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:15:04,132 INFO [Listener at localhost/37435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:15:04,132 INFO [Listener at localhost/37435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:15:04,133 INFO [Listener at localhost/37435] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-19 18:15:04,133 INFO [Listener at localhost/37435] http.HttpServer(1146): Jetty bound to port 40371 2023-07-19 18:15:04,133 INFO [Listener at localhost/37435] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:04,137 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:04,137 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@35eb1866{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:15:04,138 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:04,138 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3de75ac7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:15:04,253 INFO [Listener at localhost/37435] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:15:04,254 INFO [Listener at localhost/37435] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:15:04,254 INFO [Listener at localhost/37435] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:15:04,254 INFO [Listener at localhost/37435] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 18:15:04,255 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:04,256 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4c4e4298{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/java.io.tmpdir/jetty-0_0_0_0-40371-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6065483671449528408/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:04,257 INFO [Listener at localhost/37435] server.AbstractConnector(333): Started ServerConnector@2f107ff7{HTTP/1.1, (http/1.1)}{0.0.0.0:40371} 2023-07-19 18:15:04,257 INFO [Listener at localhost/37435] server.Server(415): Started @37966ms 2023-07-19 18:15:04,269 INFO [Listener at localhost/37435] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:15:04,270 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:04,270 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:04,270 INFO [Listener at localhost/37435] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:15:04,270 INFO 
[Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:04,270 INFO [Listener at localhost/37435] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:15:04,270 INFO [Listener at localhost/37435] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:15:04,271 INFO [Listener at localhost/37435] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38691 2023-07-19 18:15:04,271 INFO [Listener at localhost/37435] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:15:04,273 DEBUG [Listener at localhost/37435] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:15:04,273 INFO [Listener at localhost/37435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:04,274 INFO [Listener at localhost/37435] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:04,275 INFO [Listener at localhost/37435] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38691 connecting to ZooKeeper ensemble=127.0.0.1:55505 2023-07-19 18:15:04,279 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:386910x0, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:15:04,280 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): regionserver:386910x0, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:15:04,281 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38691-0x1017ecb5e400003 connected 2023-07-19 18:15:04,281 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:04,282 DEBUG [Listener at localhost/37435] zookeeper.ZKUtil(164): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:15:04,282 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38691 2023-07-19 18:15:04,282 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38691 2023-07-19 18:15:04,282 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38691 2023-07-19 18:15:04,283 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38691 2023-07-19 18:15:04,283 DEBUG [Listener at localhost/37435] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=38691 2023-07-19 18:15:04,285 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:15:04,285 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:15:04,285 INFO [Listener at localhost/37435] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:15:04,286 INFO [Listener at localhost/37435] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:15:04,286 INFO [Listener at localhost/37435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:15:04,286 INFO [Listener at localhost/37435] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:15:04,286 INFO [Listener at localhost/37435] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 18:15:04,286 INFO [Listener at localhost/37435] http.HttpServer(1146): Jetty bound to port 39709 2023-07-19 18:15:04,287 INFO [Listener at localhost/37435] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:04,288 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:04,288 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@53ecbb85{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:15:04,288 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:04,288 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7fb867a7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:15:04,423 INFO [Listener at localhost/37435] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:15:04,425 INFO [Listener at localhost/37435] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:15:04,426 INFO [Listener at localhost/37435] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:15:04,426 INFO [Listener at localhost/37435] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 18:15:04,427 INFO [Listener at localhost/37435] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:04,428 INFO [Listener at localhost/37435] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@64b278aa{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/java.io.tmpdir/jetty-0_0_0_0-39709-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8969020993403473476/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:04,429 INFO [Listener at localhost/37435] server.AbstractConnector(333): Started ServerConnector@7596ce29{HTTP/1.1, (http/1.1)}{0.0.0.0:39709} 2023-07-19 18:15:04,429 INFO [Listener at localhost/37435] server.Server(415): Started @38138ms 2023-07-19 18:15:04,431 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:04,439 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@49e1e3dd{HTTP/1.1, (http/1.1)}{0.0.0.0:45377} 2023-07-19 18:15:04,440 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @38148ms 2023-07-19 18:15:04,440 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33827,1689790503765 2023-07-19 18:15:04,442 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 18:15:04,443 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33827,1689790503765 2023-07-19 18:15:04,444 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:15:04,444 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:15:04,444 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:15:04,445 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:15:04,446 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:04,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 18:15:04,449 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 18:15:04,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33827,1689790503765 from backup master directory 2023-07-19 18:15:04,450 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33827,1689790503765 2023-07-19 18:15:04,450 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 18:15:04,450 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 18:15:04,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33827,1689790503765 2023-07-19 18:15:04,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/hbase.id with ID: cf2191df-1bd0-4d7b-8de2-23c4d8c3954d 2023-07-19 18:15:04,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:04,500 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:04,514 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2e2695be to 127.0.0.1:55505 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:04,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6085f6d2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:04,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:04,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-19 18:15:04,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:04,523 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/data/master/store-tmp 2023-07-19 18:15:04,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:04,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 18:15:04,543 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:04,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:04,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 18:15:04,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:04,544 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
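The 'master:store' descriptor logged above declares a single 'proc' family with VERSIONS '1', BLOOMFILTER 'ROW', BLOCKSIZE '65536' and BLOCKCACHE 'true'. As a rough sketch only, the same family settings map onto the public HBase 2.x builder API as below; the wrapper class name is invented for illustration, and this is not the internal MasterRegion bootstrap code, which builds the region directly rather than through the Admin client.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical helper mirroring the 'proc' family attributes printed in the log.
    public class MasterStoreDescriptorSketch {
      public static TableDescriptor build() {
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setMaxVersions(1)                   // VERSIONS => '1'
            .setBlocksize(65536)                 // BLOCKSIZE => '65536'
            .setInMemory(false)                  // IN_MEMORY => 'false'
            .setBlockCacheEnabled(true)          // BLOCKCACHE => 'true'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(proc)
            .build();
      }
    }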
2023-07-19 18:15:04,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 18:15:04,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/WALs/jenkins-hbase4.apache.org,33827,1689790503765 2023-07-19 18:15:04,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33827%2C1689790503765, suffix=, logDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/WALs/jenkins-hbase4.apache.org,33827,1689790503765, archiveDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/oldWALs, maxLogs=10 2023-07-19 18:15:04,577 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36363,DS-c578ab95-2dac-4564-b255-cb2ec84837a1,DISK] 2023-07-19 18:15:04,579 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36427,DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8,DISK] 2023-07-19 18:15:04,579 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45971,DS-c01882ed-5342-41e5-9b0b-4e48e32377d9,DISK] 2023-07-19 18:15:04,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/WALs/jenkins-hbase4.apache.org,33827,1689790503765/jenkins-hbase4.apache.org%2C33827%2C1689790503765.1689790504548 2023-07-19 18:15:04,590 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36363,DS-c578ab95-2dac-4564-b255-cb2ec84837a1,DISK], DatanodeInfoWithStorage[127.0.0.1:45971,DS-c01882ed-5342-41e5-9b0b-4e48e32377d9,DISK], DatanodeInfoWithStorage[127.0.0.1:36427,DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8,DISK]] 2023-07-19 18:15:04,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:04,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:04,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:04,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:04,596 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:04,598 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-19 18:15:04,599 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-19 18:15:04,600 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:04,601 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:04,601 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:04,604 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:04,607 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:04,607 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10834162720, jitterRate=0.009010031819343567}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:04,607 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 18:15:04,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-19 18:15:04,612 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-19 18:15:04,612 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-19 18:15:04,612 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-19 18:15:04,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-19 18:15:04,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-19 18:15:04,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-19 18:15:04,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-19 18:15:04,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-19 18:15:04,616 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-19 18:15:04,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-19 18:15:04,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-19 18:15:04,619 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:04,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-19 18:15:04,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-19 18:15:04,620 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-19 18:15:04,622 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:04,622 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:04,622 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-19 18:15:04,622 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:04,622 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:04,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33827,1689790503765, sessionid=0x1017ecb5e400000, setting cluster-up flag (Was=false) 2023-07-19 18:15:04,629 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:04,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-19 18:15:04,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33827,1689790503765 2023-07-19 18:15:04,640 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:04,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-19 18:15:04,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33827,1689790503765 2023-07-19 18:15:04,647 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.hbase-snapshot/.tmp 2023-07-19 18:15:04,652 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-19 18:15:04,652 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-19 18:15:04,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-19 18:15:04,656 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 18:15:04,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
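The 'Registered master coprocessor service: service=RSGroupAdminService' and 'RSGroupAdminEndpoint loaded' lines above come from the rsgroup endpoint being wired into the master at startup. A minimal sketch of the configuration that enables it on branch-2.4, assuming the documented property names; the helper class is invented for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Hypothetical helper showing the two settings behind the rsgroup coprocessor lines.
    public class RsGroupSetupSketch {
      public static Configuration withRsGroups() {
        Configuration conf = HBaseConfiguration.create();
        // Load the RSGroup admin endpoint as a master coprocessor (the
        // "System coprocessor ... RSGroupAdminEndpoint loaded" line reflects this).
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // Use the group-aware balancer instead of the stock balancer.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        return conf;
      }
    }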
2023-07-19 18:15:04,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-19 18:15:04,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-19 18:15:04,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 18:15:04,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 18:15:04,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 18:15:04,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
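The two StochasticLoadBalancer 'Loaded config' lines above echo the default tuning (maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000). A sketch of how those values map to configuration, assuming the usual 'hbase.master.balancer.stochastic.*' property names; verify the keys against the running version before relying on them.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Hypothetical helper; the values are simply the defaults echoed in the log line.
    public class BalancerTuningSketch {
      public static Configuration tuned() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
        return conf;
      }
    }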
2023-07-19 18:15:04,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:15:04,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:15:04,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:15:04,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:15:04,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-19 18:15:04,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:15:04,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689790534675 2023-07-19 18:15:04,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-19 18:15:04,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-19 18:15:04,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-19 18:15:04,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-19 18:15:04,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-19 18:15:04,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-19 18:15:04,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
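The cleaner lines above show the LogsCleaner chore scheduled on a 600000 ms period and the TimeToLive/Replication log cleaner delegates being initialized. A hedged sketch of the corresponding configuration, assuming the 'hbase.master.cleaner.interval' and 'hbase.master.logcleaner.plugins' keys; note that setting the plugin list replaces the default delegates, so the single entry below is illustrative only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Hypothetical helper mirroring the cleaner chore settings reported in the log.
    public class CleanerChoreSketch {
      public static Configuration cleaners() {
        Configuration conf = HBaseConfiguration.create();
        // Shared period for the LogsCleaner and HFileCleaner chores; 600000 ms
        // matches the "period=600000" ScheduledChore lines above.
        conf.setInt("hbase.master.cleaner.interval", 600_000);
        // Cleaner delegates are a comma-separated class list, e.g. the
        // TimeToLiveLogCleaner the master reports initializing.
        conf.set("hbase.master.logcleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner");
        return conf;
      }
    }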
2023-07-19 18:15:04,676 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 18:15:04,676 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-19 18:15:04,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-19 18:15:04,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-19 18:15:04,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-19 18:15:04,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-19 18:15:04,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-19 18:15:04,678 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:04,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689790504678,5,FailOnTimeoutGroup] 2023-07-19 18:15:04,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689790504679,5,FailOnTimeoutGroup] 2023-07-19 18:15:04,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-19 18:15:04,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
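The hbase:meta descriptor written above (info, rep_barrier and table families plus the MultiRowMutationEndpoint coprocessor) can later be read back through the client API. A small sketch under that assumption; the class name is invented for illustration.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    // Hypothetical reader for the descriptor the InitMetaProcedure wrote under .tabledesc.
    public class MetaDescriptorSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableDescriptor meta = admin.getDescriptor(TableName.META_TABLE_NAME);
          // Prints info, rep_barrier and table with their block sizes (8192/65536/8192).
          for (ColumnFamilyDescriptor cf : meta.getColumnFamilies()) {
            System.out.println(cf.getNameAsString() + " blocksize=" + cf.getBlocksize());
          }
        }
      }
    }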
2023-07-19 18:15:04,696 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:04,696 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:04,696 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6 2023-07-19 18:15:04,710 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:04,712 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 18:15:04,713 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/info 2023-07-19 18:15:04,714 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 18:15:04,714 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:04,715 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 18:15:04,716 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:15:04,716 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 18:15:04,717 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:04,717 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 18:15:04,718 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/table 2023-07-19 18:15:04,719 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 18:15:04,719 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:04,720 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740 2023-07-19 18:15:04,721 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740 2023-07-19 18:15:04,724 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-19 18:15:04,726 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 18:15:04,728 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:04,729 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11700795040, jitterRate=0.08972145617008209}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 18:15:04,729 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 18:15:04,729 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 18:15:04,729 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 18:15:04,729 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 18:15:04,729 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 18:15:04,729 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 18:15:04,730 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 18:15:04,730 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 18:15:04,731 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 18:15:04,732 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-19 18:15:04,732 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-19 18:15:04,732 INFO [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(951): ClusterId : cf2191df-1bd0-4d7b-8de2-23c4d8c3954d 2023-07-19 18:15:04,732 INFO [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(951): ClusterId : cf2191df-1bd0-4d7b-8de2-23c4d8c3954d 2023-07-19 18:15:04,734 DEBUG [RS:0;jenkins-hbase4:45277] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:15:04,732 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(951): ClusterId : cf2191df-1bd0-4d7b-8de2-23c4d8c3954d 2023-07-19 18:15:04,735 DEBUG [RS:2;jenkins-hbase4:38691] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:15:04,736 DEBUG [RS:1;jenkins-hbase4:38039] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:15:04,737 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-19 18:15:04,739 DEBUG [RS:0;jenkins-hbase4:45277] 
procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:15:04,739 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-19 18:15:04,739 DEBUG [RS:0;jenkins-hbase4:45277] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:15:04,739 DEBUG [RS:2;jenkins-hbase4:38691] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:15:04,739 DEBUG [RS:2;jenkins-hbase4:38691] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:15:04,739 DEBUG [RS:1;jenkins-hbase4:38039] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:15:04,740 DEBUG [RS:1;jenkins-hbase4:38039] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:15:04,743 DEBUG [RS:0;jenkins-hbase4:45277] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 18:15:04,745 DEBUG [RS:2;jenkins-hbase4:38691] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 18:15:04,745 DEBUG [RS:0;jenkins-hbase4:45277] zookeeper.ReadOnlyZKClient(139): Connect 0x03aac91c to 127.0.0.1:55505 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:04,745 DEBUG [RS:1;jenkins-hbase4:38039] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 18:15:04,746 DEBUG [RS:2;jenkins-hbase4:38691] zookeeper.ReadOnlyZKClient(139): Connect 0x4a86aaad to 127.0.0.1:55505 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:04,751 DEBUG [RS:1;jenkins-hbase4:38039] zookeeper.ReadOnlyZKClient(139): Connect 0x115e22e0 to 127.0.0.1:55505 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:04,760 DEBUG [RS:2;jenkins-hbase4:38691] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44415d8d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:04,760 DEBUG [RS:0;jenkins-hbase4:45277] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d6d2719, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:04,761 DEBUG [RS:1;jenkins-hbase4:38039] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ba64990, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:04,761 DEBUG [RS:2;jenkins-hbase4:38691] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57c57b6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:15:04,761 DEBUG [RS:0;jenkins-hbase4:45277] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c502d22, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:15:04,761 DEBUG [RS:1;jenkins-hbase4:38039] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ea4c480, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:15:04,771 DEBUG [RS:2;jenkins-hbase4:38691] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:38691 2023-07-19 18:15:04,771 DEBUG [RS:0;jenkins-hbase4:45277] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:45277 2023-07-19 18:15:04,772 INFO [RS:2;jenkins-hbase4:38691] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:15:04,772 INFO [RS:0;jenkins-hbase4:45277] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:15:04,772 INFO [RS:0;jenkins-hbase4:45277] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:15:04,772 INFO [RS:2;jenkins-hbase4:38691] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:15:04,772 DEBUG [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 18:15:04,772 DEBUG [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 18:15:04,773 INFO [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33827,1689790503765 with isa=jenkins-hbase4.apache.org/172.31.14.131:45277, startcode=1689790503944 2023-07-19 18:15:04,773 INFO [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33827,1689790503765 with isa=jenkins-hbase4.apache.org/172.31.14.131:38691, startcode=1689790504269 2023-07-19 18:15:04,773 DEBUG [RS:2;jenkins-hbase4:38691] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:15:04,773 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38039 2023-07-19 18:15:04,773 DEBUG [RS:0;jenkins-hbase4:45277] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:15:04,773 INFO [RS:1;jenkins-hbase4:38039] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:15:04,773 INFO [RS:1;jenkins-hbase4:38039] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:15:04,773 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1022): About to register with Master. 
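The reportForDuty and 'Registering regionserver=...' exchange above is each region server announcing its host, port and startcode to the master; the rsgroup listener then folds the new servers into the default group. Once the cluster is up, the same server set is visible from a client, as in this sketch (the class name is invented for illustration).

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Hypothetical helper listing live region servers after registration completes.
    public class ListLiveServersSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Each server that completed reportForDuty appears here with the same
          // host,port,startcode triple seen in the registration log lines.
          for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
            System.out.println(sn.getServerName());
          }
        }
      }
    }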
2023-07-19 18:15:04,774 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33827,1689790503765 with isa=jenkins-hbase4.apache.org/172.31.14.131:38039, startcode=1689790504113 2023-07-19 18:15:04,774 DEBUG [RS:1;jenkins-hbase4:38039] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:15:04,775 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41245, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:15:04,777 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33827] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:04,777 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 18:15:04,777 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-19 18:15:04,778 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45441, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:15:04,778 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59713, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:15:04,778 DEBUG [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6 2023-07-19 18:15:04,778 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33827] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:04,778 DEBUG [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39265 2023-07-19 18:15:04,778 DEBUG [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45437 2023-07-19 18:15:04,778 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 18:15:04,778 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33827] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:04,778 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-19 18:15:04,778 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 18:15:04,778 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-19 18:15:04,778 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6 2023-07-19 18:15:04,778 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39265 2023-07-19 18:15:04,778 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45437 2023-07-19 18:15:04,779 DEBUG [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6 2023-07-19 18:15:04,779 DEBUG [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39265 2023-07-19 18:15:04,779 DEBUG [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45437 2023-07-19 18:15:04,779 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:04,784 DEBUG [RS:0;jenkins-hbase4:45277] zookeeper.ZKUtil(162): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:04,784 WARN [RS:0;jenkins-hbase4:45277] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 18:15:04,784 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45277,1689790503944] 2023-07-19 18:15:04,784 INFO [RS:0;jenkins-hbase4:45277] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:04,784 DEBUG [RS:1;jenkins-hbase4:38039] zookeeper.ZKUtil(162): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:04,784 DEBUG [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:04,784 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38039,1689790504113] 2023-07-19 18:15:04,784 WARN [RS:1;jenkins-hbase4:38039] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 18:15:04,784 DEBUG [RS:2;jenkins-hbase4:38691] zookeeper.ZKUtil(162): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:04,785 INFO [RS:1;jenkins-hbase4:38039] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:04,785 WARN [RS:2;jenkins-hbase4:38691] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 18:15:04,784 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38691,1689790504269] 2023-07-19 18:15:04,785 INFO [RS:2;jenkins-hbase4:38691] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:04,785 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:04,785 DEBUG [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:04,793 DEBUG [RS:0;jenkins-hbase4:45277] zookeeper.ZKUtil(162): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:04,793 DEBUG [RS:2;jenkins-hbase4:38691] zookeeper.ZKUtil(162): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:04,793 DEBUG [RS:1;jenkins-hbase4:38039] zookeeper.ZKUtil(162): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:04,793 DEBUG [RS:0;jenkins-hbase4:45277] zookeeper.ZKUtil(162): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:04,793 DEBUG [RS:2;jenkins-hbase4:38691] zookeeper.ZKUtil(162): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:04,793 DEBUG [RS:1;jenkins-hbase4:38039] zookeeper.ZKUtil(162): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:04,794 DEBUG [RS:0;jenkins-hbase4:45277] zookeeper.ZKUtil(162): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:04,794 DEBUG [RS:2;jenkins-hbase4:38691] zookeeper.ZKUtil(162): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:04,794 DEBUG [RS:1;jenkins-hbase4:38039] zookeeper.ZKUtil(162): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Set 
watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:04,795 DEBUG [RS:0;jenkins-hbase4:45277] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:15:04,795 DEBUG [RS:2;jenkins-hbase4:38691] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:15:04,795 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:15:04,795 INFO [RS:0;jenkins-hbase4:45277] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:15:04,795 INFO [RS:2;jenkins-hbase4:38691] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:15:04,795 INFO [RS:1;jenkins-hbase4:38039] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:15:04,796 INFO [RS:0;jenkins-hbase4:45277] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:15:04,796 INFO [RS:0;jenkins-hbase4:45277] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:15:04,796 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,797 INFO [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:15:04,798 INFO [RS:1;jenkins-hbase4:38039] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:15:04,798 INFO [RS:1;jenkins-hbase4:38039] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:15:04,798 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,798 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:15:04,799 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:04,800 DEBUG [RS:0;jenkins-hbase4:45277] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,800 DEBUG [RS:0;jenkins-hbase4:45277] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,800 DEBUG [RS:0;jenkins-hbase4:45277] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,800 DEBUG [RS:0;jenkins-hbase4:45277] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,800 DEBUG [RS:0;jenkins-hbase4:45277] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,800 DEBUG [RS:0;jenkins-hbase4:45277] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:15:04,800 DEBUG [RS:0;jenkins-hbase4:45277] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,800 DEBUG [RS:0;jenkins-hbase4:45277] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,800 DEBUG [RS:0;jenkins-hbase4:45277] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,800 DEBUG [RS:0;jenkins-hbase4:45277] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,800 INFO [RS:2;jenkins-hbase4:38691] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:15:04,801 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,801 DEBUG [RS:1;jenkins-hbase4:38039] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,801 INFO [RS:2;jenkins-hbase4:38691] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:15:04,801 DEBUG [RS:1;jenkins-hbase4:38039] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,802 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,802 DEBUG [RS:1;jenkins-hbase4:38039] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,802 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:04,802 DEBUG [RS:1;jenkins-hbase4:38039] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,802 INFO [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:15:04,802 DEBUG [RS:1;jenkins-hbase4:38039] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,802 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,803 DEBUG [RS:1;jenkins-hbase4:38039] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:15:04,803 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,803 DEBUG [RS:1;jenkins-hbase4:38039] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,803 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,803 DEBUG [RS:1;jenkins-hbase4:38039] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,803 DEBUG [RS:1;jenkins-hbase4:38039] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,804 DEBUG [RS:1;jenkins-hbase4:38039] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,804 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,805 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,805 DEBUG [RS:2;jenkins-hbase4:38691] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,805 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,805 DEBUG [RS:2;jenkins-hbase4:38691] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,805 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,805 DEBUG [RS:2;jenkins-hbase4:38691] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,805 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:04,805 DEBUG [RS:2;jenkins-hbase4:38691] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,806 DEBUG [RS:2;jenkins-hbase4:38691] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,806 DEBUG [RS:2;jenkins-hbase4:38691] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:15:04,806 DEBUG [RS:2;jenkins-hbase4:38691] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,806 DEBUG [RS:2;jenkins-hbase4:38691] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,806 DEBUG [RS:2;jenkins-hbase4:38691] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,806 DEBUG [RS:2;jenkins-hbase4:38691] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:04,811 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,811 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,811 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,811 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,817 INFO [RS:0;jenkins-hbase4:45277] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:15:04,817 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45277,1689790503944-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,820 INFO [RS:1;jenkins-hbase4:38039] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:15:04,820 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38039,1689790504113-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,825 INFO [RS:2;jenkins-hbase4:38691] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:15:04,826 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38691,1689790504269-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:04,831 INFO [RS:1;jenkins-hbase4:38039] regionserver.Replication(203): jenkins-hbase4.apache.org,38039,1689790504113 started 2023-07-19 18:15:04,831 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38039,1689790504113, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38039, sessionid=0x1017ecb5e400002 2023-07-19 18:15:04,831 DEBUG [RS:1;jenkins-hbase4:38039] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:15:04,831 DEBUG [RS:1;jenkins-hbase4:38039] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:04,831 DEBUG [RS:1;jenkins-hbase4:38039] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38039,1689790504113' 2023-07-19 18:15:04,831 DEBUG [RS:1;jenkins-hbase4:38039] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:15:04,832 DEBUG [RS:1;jenkins-hbase4:38039] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:15:04,832 DEBUG [RS:1;jenkins-hbase4:38039] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:15:04,832 DEBUG [RS:1;jenkins-hbase4:38039] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:15:04,832 DEBUG [RS:1;jenkins-hbase4:38039] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:04,832 DEBUG [RS:1;jenkins-hbase4:38039] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38039,1689790504113' 2023-07-19 18:15:04,832 DEBUG [RS:1;jenkins-hbase4:38039] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:15:04,833 DEBUG [RS:1;jenkins-hbase4:38039] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:15:04,833 INFO [RS:0;jenkins-hbase4:45277] regionserver.Replication(203): jenkins-hbase4.apache.org,45277,1689790503944 started 2023-07-19 18:15:04,833 INFO [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45277,1689790503944, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45277, sessionid=0x1017ecb5e400001 2023-07-19 18:15:04,833 DEBUG [RS:1;jenkins-hbase4:38039] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:15:04,833 DEBUG [RS:0;jenkins-hbase4:45277] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:15:04,833 INFO [RS:1;jenkins-hbase4:38039] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-19 18:15:04,833 DEBUG [RS:0;jenkins-hbase4:45277] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:04,833 DEBUG [RS:0;jenkins-hbase4:45277] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45277,1689790503944' 2023-07-19 18:15:04,833 DEBUG [RS:0;jenkins-hbase4:45277] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:15:04,834 DEBUG 
[RS:0;jenkins-hbase4:45277] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:15:04,834 DEBUG [RS:0;jenkins-hbase4:45277] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:15:04,834 DEBUG [RS:0;jenkins-hbase4:45277] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:15:04,834 DEBUG [RS:0;jenkins-hbase4:45277] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:04,834 DEBUG [RS:0;jenkins-hbase4:45277] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45277,1689790503944' 2023-07-19 18:15:04,834 DEBUG [RS:0;jenkins-hbase4:45277] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:15:04,834 DEBUG [RS:0;jenkins-hbase4:45277] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:15:04,835 DEBUG [RS:0;jenkins-hbase4:45277] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:15:04,835 INFO [RS:0;jenkins-hbase4:45277] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-19 18:15:04,835 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,835 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,836 DEBUG [RS:1;jenkins-hbase4:38039] zookeeper.ZKUtil(398): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-19 18:15:04,836 DEBUG [RS:0;jenkins-hbase4:45277] zookeeper.ZKUtil(398): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-19 18:15:04,836 INFO [RS:0;jenkins-hbase4:45277] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-19 18:15:04,836 INFO [RS:1;jenkins-hbase4:38039] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-19 18:15:04,836 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,836 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,837 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,837 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:04,837 INFO [RS:2;jenkins-hbase4:38691] regionserver.Replication(203): jenkins-hbase4.apache.org,38691,1689790504269 started 2023-07-19 18:15:04,838 INFO [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38691,1689790504269, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38691, sessionid=0x1017ecb5e400003 2023-07-19 18:15:04,838 DEBUG [RS:2;jenkins-hbase4:38691] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:15:04,838 DEBUG [RS:2;jenkins-hbase4:38691] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:04,838 DEBUG [RS:2;jenkins-hbase4:38691] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38691,1689790504269' 2023-07-19 18:15:04,838 DEBUG [RS:2;jenkins-hbase4:38691] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:15:04,838 DEBUG [RS:2;jenkins-hbase4:38691] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:15:04,838 DEBUG [RS:2;jenkins-hbase4:38691] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:15:04,838 DEBUG [RS:2;jenkins-hbase4:38691] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:15:04,838 DEBUG [RS:2;jenkins-hbase4:38691] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:04,838 DEBUG [RS:2;jenkins-hbase4:38691] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38691,1689790504269' 2023-07-19 18:15:04,839 DEBUG [RS:2;jenkins-hbase4:38691] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:15:04,839 DEBUG [RS:2;jenkins-hbase4:38691] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:15:04,839 DEBUG [RS:2;jenkins-hbase4:38691] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:15:04,839 INFO [RS:2;jenkins-hbase4:38691] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-19 18:15:04,839 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,840 DEBUG [RS:2;jenkins-hbase4:38691] zookeeper.ZKUtil(398): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-19 18:15:04,840 INFO [RS:2;jenkins-hbase4:38691] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-19 18:15:04,840 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:04,840 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:04,889 DEBUG [jenkins-hbase4:33827] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-19 18:15:04,890 DEBUG [jenkins-hbase4:33827] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:15:04,890 DEBUG [jenkins-hbase4:33827] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:15:04,890 DEBUG [jenkins-hbase4:33827] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:15:04,890 DEBUG [jenkins-hbase4:33827] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:15:04,890 DEBUG [jenkins-hbase4:33827] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:15:04,891 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38039,1689790504113, state=OPENING 2023-07-19 18:15:04,893 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-19 18:15:04,894 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:04,896 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 18:15:04,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38039,1689790504113}] 2023-07-19 18:15:04,940 INFO [RS:0;jenkins-hbase4:45277] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45277%2C1689790503944, suffix=, logDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,45277,1689790503944, archiveDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/oldWALs, maxLogs=32 2023-07-19 18:15:04,940 INFO [RS:1;jenkins-hbase4:38039] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38039%2C1689790504113, suffix=, logDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,38039,1689790504113, archiveDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/oldWALs, maxLogs=32 2023-07-19 18:15:04,942 INFO [RS:2;jenkins-hbase4:38691] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38691%2C1689790504269, suffix=, logDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,38691,1689790504269, archiveDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/oldWALs, maxLogs=32 2023-07-19 18:15:04,966 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36427,DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8,DISK] 2023-07-19 18:15:04,966 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36363,DS-c578ab95-2dac-4564-b255-cb2ec84837a1,DISK] 2023-07-19 18:15:04,966 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36363,DS-c578ab95-2dac-4564-b255-cb2ec84837a1,DISK] 2023-07-19 18:15:04,966 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45971,DS-c01882ed-5342-41e5-9b0b-4e48e32377d9,DISK] 2023-07-19 18:15:04,967 WARN [ReadOnlyZKClient-127.0.0.1:55505@0x2e2695be] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-19 18:15:04,967 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45971,DS-c01882ed-5342-41e5-9b0b-4e48e32377d9,DISK] 2023-07-19 18:15:04,967 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36427,DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8,DISK] 2023-07-19 18:15:04,967 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33827,1689790503765] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:15:04,971 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34032, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:15:04,976 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38039] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:34032 deadline: 1689790564971, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:04,979 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36363,DS-c578ab95-2dac-4564-b255-cb2ec84837a1,DISK] 2023-07-19 18:15:04,979 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36427,DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8,DISK] 2023-07-19 18:15:04,979 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45971,DS-c01882ed-5342-41e5-9b0b-4e48e32377d9,DISK] 2023-07-19 18:15:04,980 INFO [RS:1;jenkins-hbase4:38039] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,38039,1689790504113/jenkins-hbase4.apache.org%2C38039%2C1689790504113.1689790504946 2023-07-19 18:15:04,980 INFO [RS:0;jenkins-hbase4:45277] 
wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,45277,1689790503944/jenkins-hbase4.apache.org%2C45277%2C1689790503944.1689790504945 2023-07-19 18:15:04,982 DEBUG [RS:1;jenkins-hbase4:38039] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36363,DS-c578ab95-2dac-4564-b255-cb2ec84837a1,DISK], DatanodeInfoWithStorage[127.0.0.1:36427,DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8,DISK], DatanodeInfoWithStorage[127.0.0.1:45971,DS-c01882ed-5342-41e5-9b0b-4e48e32377d9,DISK]] 2023-07-19 18:15:04,983 DEBUG [RS:0;jenkins-hbase4:45277] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45971,DS-c01882ed-5342-41e5-9b0b-4e48e32377d9,DISK], DatanodeInfoWithStorage[127.0.0.1:36363,DS-c578ab95-2dac-4564-b255-cb2ec84837a1,DISK], DatanodeInfoWithStorage[127.0.0.1:36427,DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8,DISK]] 2023-07-19 18:15:04,983 INFO [RS:2;jenkins-hbase4:38691] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,38691,1689790504269/jenkins-hbase4.apache.org%2C38691%2C1689790504269.1689790504946 2023-07-19 18:15:04,984 DEBUG [RS:2;jenkins-hbase4:38691] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36363,DS-c578ab95-2dac-4564-b255-cb2ec84837a1,DISK], DatanodeInfoWithStorage[127.0.0.1:36427,DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8,DISK], DatanodeInfoWithStorage[127.0.0.1:45971,DS-c01882ed-5342-41e5-9b0b-4e48e32377d9,DISK]] 2023-07-19 18:15:05,051 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:05,053 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:15:05,055 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34034, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:15:05,063 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 18:15:05,063 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:05,065 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38039%2C1689790504113.meta, suffix=.meta, logDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,38039,1689790504113, archiveDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/oldWALs, maxLogs=32 2023-07-19 18:15:05,084 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36363,DS-c578ab95-2dac-4564-b255-cb2ec84837a1,DISK] 2023-07-19 18:15:05,084 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:45971,DS-c01882ed-5342-41e5-9b0b-4e48e32377d9,DISK] 2023-07-19 18:15:05,088 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36427,DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8,DISK] 2023-07-19 18:15:05,092 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/WALs/jenkins-hbase4.apache.org,38039,1689790504113/jenkins-hbase4.apache.org%2C38039%2C1689790504113.meta.1689790505066.meta 2023-07-19 18:15:05,093 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36363,DS-c578ab95-2dac-4564-b255-cb2ec84837a1,DISK], DatanodeInfoWithStorage[127.0.0.1:45971,DS-c01882ed-5342-41e5-9b0b-4e48e32377d9,DISK], DatanodeInfoWithStorage[127.0.0.1:36427,DS-1afcffcf-0de0-408b-9729-31f8bf5e79c8,DISK]] 2023-07-19 18:15:05,093 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:05,093 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 18:15:05,093 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 18:15:05,093 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-19 18:15:05,093 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 18:15:05,093 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:05,093 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 18:15:05,093 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 18:15:05,098 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 18:15:05,099 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/info 2023-07-19 18:15:05,099 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/info 2023-07-19 18:15:05,100 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 18:15:05,101 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:05,101 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 18:15:05,102 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:15:05,102 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:15:05,102 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 18:15:05,103 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:05,103 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 18:15:05,104 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/table 2023-07-19 18:15:05,104 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/table 2023-07-19 18:15:05,104 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 18:15:05,105 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:05,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740 2023-07-19 18:15:05,107 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740 2023-07-19 18:15:05,109 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-19 18:15:05,110 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 18:15:05,111 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9952787040, jitterRate=-0.07307447493076324}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 18:15:05,111 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 18:15:05,112 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689790505051 2023-07-19 18:15:05,116 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 18:15:05,117 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 18:15:05,118 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38039,1689790504113, state=OPEN 2023-07-19 18:15:05,120 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 18:15:05,121 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 18:15:05,122 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-19 18:15:05,122 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38039,1689790504113 in 226 msec 2023-07-19 18:15:05,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-19 18:15:05,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 390 msec 2023-07-19 18:15:05,125 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 466 msec 2023-07-19 18:15:05,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689790505125, completionTime=-1 2023-07-19 18:15:05,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-19 18:15:05,125 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-19 18:15:05,129 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-19 18:15:05,129 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689790565129 2023-07-19 18:15:05,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689790625130 2023-07-19 18:15:05,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-19 18:15:05,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33827,1689790503765-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:05,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33827,1689790503765-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:05,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33827,1689790503765-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:05,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33827, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:05,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:05,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-19 18:15:05,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:05,137 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-19 18:15:05,139 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-19 18:15:05,139 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:15:05,140 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:15:05,141 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/namespace/47cade7ff99f47216401129cba97f5af 2023-07-19 18:15:05,142 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/namespace/47cade7ff99f47216401129cba97f5af empty. 2023-07-19 18:15:05,142 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/namespace/47cade7ff99f47216401129cba97f5af 2023-07-19 18:15:05,143 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-19 18:15:05,162 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:05,163 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 47cade7ff99f47216401129cba97f5af, NAME => 'hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp 2023-07-19 18:15:05,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:05,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 47cade7ff99f47216401129cba97f5af, disabling compactions & flushes 2023-07-19 18:15:05,178 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 
2023-07-19 18:15:05,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 2023-07-19 18:15:05,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. after waiting 0 ms 2023-07-19 18:15:05,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 2023-07-19 18:15:05,178 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 2023-07-19 18:15:05,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 47cade7ff99f47216401129cba97f5af: 2023-07-19 18:15:05,181 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:15:05,183 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790505182"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790505182"}]},"ts":"1689790505182"} 2023-07-19 18:15:05,186 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 18:15:05,186 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:15:05,187 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790505187"}]},"ts":"1689790505187"} 2023-07-19 18:15:05,188 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-19 18:15:05,194 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:15:05,194 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:15:05,194 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:15:05,194 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:15:05,194 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:15:05,194 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=47cade7ff99f47216401129cba97f5af, ASSIGN}] 2023-07-19 18:15:05,199 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=47cade7ff99f47216401129cba97f5af, ASSIGN 2023-07-19 18:15:05,199 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=47cade7ff99f47216401129cba97f5af, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38039,1689790504113; forceNewPlan=false, retain=false 2023-07-19 18:15:05,284 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33827,1689790503765] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:05,286 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33827,1689790503765] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-19 18:15:05,288 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:15:05,289 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:15:05,290 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:05,291 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0 empty. 
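The hbase:rsgroup descriptor above differs from the namespace table in two ways: it registers the MultiRowMutationEndpoint coprocessor and pins DisabledRegionSplitPolicy through table metadata. A minimal sketch of how that combination is normally expressed with the public builders (illustrative only; the wrapper class is invented):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RsGroupDescriptorSketch {
      // Single 'm' family, the multi-row-mutation coprocessor, and splitting
      // disabled, matching the TABLE_ATTRIBUTES printed in the log.
      public static TableDescriptor build() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase:rsgroup"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
      }
    }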
2023-07-19 18:15:05,291 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:05,292 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-19 18:15:05,304 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:05,305 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => fb86e4485da6b70be23465c48bc14fd0, NAME => 'hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp 2023-07-19 18:15:05,319 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:05,320 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing fb86e4485da6b70be23465c48bc14fd0, disabling compactions & flushes 2023-07-19 18:15:05,320 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 2023-07-19 18:15:05,320 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 2023-07-19 18:15:05,320 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. after waiting 0 ms 2023-07-19 18:15:05,320 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 2023-07-19 18:15:05,320 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 
2023-07-19 18:15:05,320 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for fb86e4485da6b70be23465c48bc14fd0: 2023-07-19 18:15:05,322 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:15:05,323 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790505323"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790505323"}]},"ts":"1689790505323"} 2023-07-19 18:15:05,325 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 18:15:05,325 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:15:05,325 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790505325"}]},"ts":"1689790505325"} 2023-07-19 18:15:05,327 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-19 18:15:05,330 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:15:05,330 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:15:05,330 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:15:05,330 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:15:05,330 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:15:05,330 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=fb86e4485da6b70be23465c48bc14fd0, ASSIGN}] 2023-07-19 18:15:05,331 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=fb86e4485da6b70be23465c48bc14fd0, ASSIGN 2023-07-19 18:15:05,332 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=fb86e4485da6b70be23465c48bc14fd0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38039,1689790504113; forceNewPlan=false, retain=false 2023-07-19 18:15:05,332 INFO [jenkins-hbase4:33827] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
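The BaseLoadBalancer$Cluster traces above show placement being computed over the three registered servers, and further down the test issues 'set balanceSwitch=false' to stop automatic balancing. A minimal sketch of that client call, assuming an already-obtained Admin handle (the wrapper class is invented):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Admin;

    public class BalancerSwitchSketch {
      // Turns the balancer off synchronously and returns its previous state,
      // the client-side counterpart of the "set balanceSwitch=false" entry below.
      public static boolean disableBalancer(Admin admin) throws IOException {
        return admin.balancerSwitch(false, true);
      }
    }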
2023-07-19 18:15:05,334 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=47cade7ff99f47216401129cba97f5af, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:05,334 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790505334"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790505334"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790505334"}]},"ts":"1689790505334"} 2023-07-19 18:15:05,334 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=fb86e4485da6b70be23465c48bc14fd0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:05,334 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790505334"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790505334"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790505334"}]},"ts":"1689790505334"} 2023-07-19 18:15:05,335 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 47cade7ff99f47216401129cba97f5af, server=jenkins-hbase4.apache.org,38039,1689790504113}] 2023-07-19 18:15:05,335 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure fb86e4485da6b70be23465c48bc14fd0, server=jenkins-hbase4.apache.org,38039,1689790504113}] 2023-07-19 18:15:05,499 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 2023-07-19 18:15:05,499 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fb86e4485da6b70be23465c48bc14fd0, NAME => 'hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:05,499 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 18:15:05,499 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. service=MultiRowMutationService 2023-07-19 18:15:05,499 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
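The RegionStateStore puts above persist assignment progress as 'regioninfo', 'sn' and 'state' columns in the info family of hbase:meta. A hedged sketch of reading those same columns back from a client, assuming an open Connection (the class and its printing are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaStateSketch {
      // Scans hbase:meta and prints each region row with its 'info:state' column,
      // i.e. the same OPENING/OPEN values written by the procedures above.
      public static void dumpRegionStates(Connection conn) throws IOException {
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner =
                 meta.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
          for (Result r : scanner) {
            byte[] state = r.getValue(HConstants.CATALOG_FAMILY, Bytes.toBytes("state"));
            System.out.println(Bytes.toString(r.getRow()) + " state="
                + (state == null ? "<none>" : Bytes.toString(state)));
          }
        }
      }
    }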
2023-07-19 18:15:05,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:05,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:05,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:05,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:05,502 INFO [StoreOpener-fb86e4485da6b70be23465c48bc14fd0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:05,504 DEBUG [StoreOpener-fb86e4485da6b70be23465c48bc14fd0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0/m 2023-07-19 18:15:05,504 DEBUG [StoreOpener-fb86e4485da6b70be23465c48bc14fd0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0/m 2023-07-19 18:15:05,504 INFO [StoreOpener-fb86e4485da6b70be23465c48bc14fd0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fb86e4485da6b70be23465c48bc14fd0 columnFamilyName m 2023-07-19 18:15:05,505 INFO [StoreOpener-fb86e4485da6b70be23465c48bc14fd0-1] regionserver.HStore(310): Store=fb86e4485da6b70be23465c48bc14fd0/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:05,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:05,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:05,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:05,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:05,512 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fb86e4485da6b70be23465c48bc14fd0; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@446e244, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:05,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fb86e4485da6b70be23465c48bc14fd0: 2023-07-19 18:15:05,513 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0., pid=9, masterSystemTime=1689790505492 2023-07-19 18:15:05,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 2023-07-19 18:15:05,515 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 2023-07-19 18:15:05,515 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 2023-07-19 18:15:05,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 47cade7ff99f47216401129cba97f5af, NAME => 'hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:05,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 47cade7ff99f47216401129cba97f5af 2023-07-19 18:15:05,516 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=fb86e4485da6b70be23465c48bc14fd0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:05,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:05,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 47cade7ff99f47216401129cba97f5af 2023-07-19 18:15:05,516 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790505516"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790505516"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790505516"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790505516"}]},"ts":"1689790505516"} 2023-07-19 18:15:05,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(7897): checking classloading for 47cade7ff99f47216401129cba97f5af 2023-07-19 18:15:05,518 INFO [StoreOpener-47cade7ff99f47216401129cba97f5af-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 47cade7ff99f47216401129cba97f5af 2023-07-19 18:15:05,519 DEBUG [StoreOpener-47cade7ff99f47216401129cba97f5af-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/namespace/47cade7ff99f47216401129cba97f5af/info 2023-07-19 18:15:05,519 DEBUG [StoreOpener-47cade7ff99f47216401129cba97f5af-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/namespace/47cade7ff99f47216401129cba97f5af/info 2023-07-19 18:15:05,519 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-19 18:15:05,519 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure fb86e4485da6b70be23465c48bc14fd0, server=jenkins-hbase4.apache.org,38039,1689790504113 in 183 msec 2023-07-19 18:15:05,520 INFO [StoreOpener-47cade7ff99f47216401129cba97f5af-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 47cade7ff99f47216401129cba97f5af columnFamilyName info 2023-07-19 18:15:05,520 INFO [StoreOpener-47cade7ff99f47216401129cba97f5af-1] regionserver.HStore(310): Store=47cade7ff99f47216401129cba97f5af/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:05,521 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-19 18:15:05,521 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=fb86e4485da6b70be23465c48bc14fd0, ASSIGN in 189 msec 2023-07-19 18:15:05,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/namespace/47cade7ff99f47216401129cba97f5af 2023-07-19 18:15:05,522 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/namespace/47cade7ff99f47216401129cba97f5af 2023-07-19 18:15:05,522 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:15:05,522 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790505522"}]},"ts":"1689790505522"} 2023-07-19 18:15:05,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 47cade7ff99f47216401129cba97f5af 2023-07-19 18:15:05,527 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/namespace/47cade7ff99f47216401129cba97f5af/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:05,528 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 47cade7ff99f47216401129cba97f5af; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12029057440, jitterRate=0.1202932745218277}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:05,528 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 47cade7ff99f47216401129cba97f5af: 2023-07-19 18:15:05,529 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af., pid=8, masterSystemTime=1689790505492 2023-07-19 18:15:05,529 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-19 18:15:05,530 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 2023-07-19 18:15:05,530 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 
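The CompactionConfiguration summaries above (minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2) are derived from stock store-level settings. A sketch of the corresponding hbase.hstore.compaction.* keys with the same values the log reports; this is illustration, not tuning advice:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
        conf.setInt("hbase.hstore.compaction.min", 3);        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // compaction ratio
        return conf;
      }
    }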
2023-07-19 18:15:05,530 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=47cade7ff99f47216401129cba97f5af, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:05,531 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790505530"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790505530"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790505530"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790505530"}]},"ts":"1689790505530"} 2023-07-19 18:15:05,540 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:15:05,544 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-19 18:15:05,544 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 47cade7ff99f47216401129cba97f5af, server=jenkins-hbase4.apache.org,38039,1689790504113 in 200 msec 2023-07-19 18:15:05,544 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 256 msec 2023-07-19 18:15:05,546 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-19 18:15:05,546 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=47cade7ff99f47216401129cba97f5af, ASSIGN in 350 msec 2023-07-19 18:15:05,547 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:15:05,547 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790505547"}]},"ts":"1689790505547"} 2023-07-19 18:15:05,548 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-19 18:15:05,550 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:15:05,552 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 414 msec 2023-07-19 18:15:05,590 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-19 18:15:05,591 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
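Once RSGroupInfoManagerImpl reports the rsgroup table online and switches to refreshing in online mode, group metadata becomes queryable through the rsgroup admin endpoint. A hedged sketch using RSGroupAdminClient from the hbase-rsgroup module, assuming an open Connection (the wrapper class and the printing are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupQuerySketch {
      // Fetches the 'default' group, which is what the startup worker has just
      // written to ZooKeeper and the hbase:rsgroup table above.
      public static void printDefaultGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
        System.out.println("group=" + info.getName()
            + " servers=" + info.getServers()
            + " tables=" + info.getTables());
      }
    }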
2023-07-19 18:15:05,595 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:05,595 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:05,597 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 18:15:05,599 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33827,1689790503765] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-19 18:15:05,638 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-19 18:15:05,640 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:15:05,640 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:05,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-19 18:15:05,653 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:15:05,656 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-19 18:15:05,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-19 18:15:05,674 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:15:05,677 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-19 18:15:05,691 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-19 18:15:05,694 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-19 18:15:05,694 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.244sec 2023-07-19 18:15:05,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-19 18:15:05,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:05,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-19 18:15:05,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-19 18:15:05,702 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:15:05,704 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-19 18:15:05,705 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:15:05,707 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:05,707 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657 empty. 2023-07-19 18:15:05,708 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:05,708 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-19 18:15:05,710 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-19 18:15:05,710 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-19 18:15:05,713 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:05,713 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:05,713 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
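With MasterQuotaManager initializing quota support and hbase:quota being laid out above, throttle and space settings can later be stored in that table. A hedged sketch of putting a request throttle on the np1:table1 table that is created further down; the limit value is an arbitrary example:

    import java.io.IOException;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
    import org.apache.hadoop.hbase.quotas.ThrottleType;

    public class QuotaSketch {
      // Limits np1:table1 to 100 requests per second; the setting ends up as a row
      // in the hbase:quota table whose creation is logged above.
      public static void throttle(Admin admin) throws IOException {
        admin.setQuota(QuotaSettingsFactory.throttleTable(
            TableName.valueOf("np1", "table1"),
            ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
      }
    }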
2023-07-19 18:15:05,713 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-19 18:15:05,713 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33827,1689790503765-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-19 18:15:05,714 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33827,1689790503765-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-19 18:15:05,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-19 18:15:05,723 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:05,724 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => fc3bcf08d6bff82f0c5a86c17edbb657, NAME => 'hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp 2023-07-19 18:15:05,732 DEBUG [Listener at localhost/37435] zookeeper.ReadOnlyZKClient(139): Connect 0x7c5dfa12 to 127.0.0.1:55505 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:05,743 DEBUG [Listener at localhost/37435] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b7448e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:05,747 DEBUG [hconnection-0x24130d34-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:15:05,749 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:05,749 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing fc3bcf08d6bff82f0c5a86c17edbb657, disabling compactions & flushes 2023-07-19 18:15:05,749 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:05,749 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 
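The ReadOnlyZKClient and AbstractRpcClient entries above show the test-side client wiring up ZooKeeper and RPC against the quorum at 127.0.0.1:55505. A minimal sketch of how such a connection is normally obtained; the quorum and port are copied from the log and would usually come from hbase-site.xml:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClientConnectionSketch {
      public static void connect() throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.set("hbase.zookeeper.property.clientPort", "55505");
        // Opens the ZK/RPC plumbing whose setup is logged above.
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          System.out.println("cluster id: " + admin.getClusterMetrics().getClusterId());
        }
      }
    }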
2023-07-19 18:15:05,749 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. after waiting 0 ms 2023-07-19 18:15:05,749 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:05,749 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:05,749 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for fc3bcf08d6bff82f0c5a86c17edbb657: 2023-07-19 18:15:05,750 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34040, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:15:05,752 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:15:05,753 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689790505753"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790505753"}]},"ts":"1689790505753"} 2023-07-19 18:15:05,753 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,33827,1689790503765 2023-07-19 18:15:05,754 INFO [Listener at localhost/37435] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:05,757 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
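The 'Minicluster is up' entry above is the point where the harness hands control back to the test body. A sketch of the calls that produce it, sized to the single active master and the three servers visible earlier in this log (the surrounding class is invented; the method names are the HBaseTestingUtility API):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // One master and three region servers, as in this log.
        util.startMiniCluster(StartMiniClusterOption.builder()
            .numMasters(1).numRegionServers(3).build());
        try {
          // test body would go here
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }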
2023-07-19 18:15:05,758 DEBUG [Listener at localhost/37435] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-19 18:15:05,760 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36940, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-19 18:15:05,763 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:15:05,763 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790505763"}]},"ts":"1689790505763"} 2023-07-19 18:15:05,764 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-19 18:15:05,764 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:05,764 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-19 18:15:05,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-19 18:15:05,765 DEBUG [Listener at localhost/37435] zookeeper.ReadOnlyZKClient(139): Connect 0x49b7c08e to 127.0.0.1:55505 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:05,770 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:15:05,771 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:15:05,771 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:15:05,771 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:15:05,771 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:15:05,771 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=fc3bcf08d6bff82f0c5a86c17edbb657, ASSIGN}] 2023-07-19 18:15:05,773 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=fc3bcf08d6bff82f0c5a86c17edbb657, ASSIGN 2023-07-19 18:15:05,774 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=fc3bcf08d6bff82f0c5a86c17edbb657, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45277,1689790503944; forceNewPlan=false, retain=false 2023-07-19 18:15:05,774 DEBUG [Listener at localhost/37435] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75f10e21, compressor=null, 
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:05,775 INFO [Listener at localhost/37435] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:55505 2023-07-19 18:15:05,780 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:15:05,781 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017ecb5e40000a connected 2023-07-19 18:15:05,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-19 18:15:05,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-19 18:15:05,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-19 18:15:05,800 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:15:05,804 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 17 msec 2023-07-19 18:15:05,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-19 18:15:05,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:05,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-19 18:15:05,904 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:15:05,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-19 18:15:05,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-19 18:15:05,907 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:05,908 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 18:15:05,911 INFO [PEWorker-5] procedure.CreateTableProcedure(80): 
pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:15:05,913 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:05,914 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c empty. 2023-07-19 18:15:05,915 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:05,915 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-19 18:15:05,924 INFO [jenkins-hbase4:33827] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 18:15:05,926 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=fc3bcf08d6bff82f0c5a86c17edbb657, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:05,926 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689790505926"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790505926"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790505926"}]},"ts":"1689790505926"} 2023-07-19 18:15:05,927 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure fc3bcf08d6bff82f0c5a86c17edbb657, server=jenkins-hbase4.apache.org,45277,1689790503944}] 2023-07-19 18:15:05,934 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:05,935 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => fc387cdbe5b949b885e0bb11a247eb9c, NAME => 'np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp 2023-07-19 18:15:06,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-19 18:15:06,010 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:06,011 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing fc387cdbe5b949b885e0bb11a247eb9c, disabling compactions & flushes 2023-07-19 18:15:06,011 INFO [RegionOpenAndInit-np1:table1-pool-0] 
regionserver.HRegion(1626): Closing region np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 2023-07-19 18:15:06,011 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 2023-07-19 18:15:06,011 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. after waiting 0 ms 2023-07-19 18:15:06,011 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 2023-07-19 18:15:06,011 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 2023-07-19 18:15:06,011 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for fc387cdbe5b949b885e0bb11a247eb9c: 2023-07-19 18:15:06,014 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:15:06,015 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689790506015"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790506015"}]},"ts":"1689790506015"} 2023-07-19 18:15:06,016 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 18:15:06,018 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:15:06,018 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790506018"}]},"ts":"1689790506018"} 2023-07-19 18:15:06,020 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-19 18:15:06,024 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:15:06,025 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:15:06,025 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:15:06,025 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:15:06,025 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:15:06,025 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=fc387cdbe5b949b885e0bb11a247eb9c, ASSIGN}] 2023-07-19 18:15:06,026 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=fc387cdbe5b949b885e0bb11a247eb9c, ASSIGN 2023-07-19 18:15:06,027 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=fc387cdbe5b949b885e0bb11a247eb9c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38691,1689790504269; forceNewPlan=false, retain=false 2023-07-19 18:15:06,081 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:06,081 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:15:06,083 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54450, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:15:06,089 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:06,089 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fc3bcf08d6bff82f0c5a86c17edbb657, NAME => 'hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:06,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:06,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:06,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:06,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:06,092 INFO [StoreOpener-fc3bcf08d6bff82f0c5a86c17edbb657-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:06,094 DEBUG [StoreOpener-fc3bcf08d6bff82f0c5a86c17edbb657-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657/q 2023-07-19 18:15:06,094 DEBUG [StoreOpener-fc3bcf08d6bff82f0c5a86c17edbb657-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657/q 2023-07-19 18:15:06,095 INFO [StoreOpener-fc3bcf08d6bff82f0c5a86c17edbb657-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fc3bcf08d6bff82f0c5a86c17edbb657 columnFamilyName q 2023-07-19 18:15:06,096 INFO [StoreOpener-fc3bcf08d6bff82f0c5a86c17edbb657-1] regionserver.HStore(310): Store=fc3bcf08d6bff82f0c5a86c17edbb657/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:06,096 INFO [StoreOpener-fc3bcf08d6bff82f0c5a86c17edbb657-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:06,098 DEBUG [StoreOpener-fc3bcf08d6bff82f0c5a86c17edbb657-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657/u 2023-07-19 18:15:06,098 DEBUG [StoreOpener-fc3bcf08d6bff82f0c5a86c17edbb657-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657/u 2023-07-19 18:15:06,098 INFO [StoreOpener-fc3bcf08d6bff82f0c5a86c17edbb657-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fc3bcf08d6bff82f0c5a86c17edbb657 columnFamilyName u 2023-07-19 18:15:06,099 INFO [StoreOpener-fc3bcf08d6bff82f0c5a86c17edbb657-1] regionserver.HStore(310): Store=fc3bcf08d6bff82f0c5a86c17edbb657/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:06,100 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:06,100 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:06,103 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
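The CompactionConfiguration entries printed above for the quota region's 'q' and 'u' stores are just the effective view of a handful of regionserver settings, and the numbers match the branch-2.4 defaults. A minimal Java sketch of the configuration keys those values are normally read from follows; the key-to-value mapping is an assumption drawn from the defaults, not something this log states.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionDefaultsSketch {
      // Sketch only: assumed mapping from the logged CompactionConfiguration values
      // back to the hbase-site.xml keys they are normally read from.
      static Configuration defaultsAsExplicitKeys() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);       // minCompactSize: 128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                              // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                             // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2F);                       // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F);               // off-peak ratio
        conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L); // throttle point
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);                  // major period (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5F);                // major jitter
        return conf;
      }
    }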
2023-07-19 18:15:06,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:06,108 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:06,109 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fc3bcf08d6bff82f0c5a86c17edbb657; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10000016160, jitterRate=-0.0686759203672409}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-19 18:15:06,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fc3bcf08d6bff82f0c5a86c17edbb657: 2023-07-19 18:15:06,110 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657., pid=16, masterSystemTime=1689790506081 2023-07-19 18:15:06,114 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:06,114 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:06,115 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=fc3bcf08d6bff82f0c5a86c17edbb657, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:06,115 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689790506115"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790506115"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790506115"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790506115"}]},"ts":"1689790506115"} 2023-07-19 18:15:06,118 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-19 18:15:06,118 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure fc3bcf08d6bff82f0c5a86c17edbb657, server=jenkins-hbase4.apache.org,45277,1689790503944 in 189 msec 2023-07-19 18:15:06,120 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-19 18:15:06,120 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=fc3bcf08d6bff82f0c5a86c17edbb657, ASSIGN in 347 msec 2023-07-19 18:15:06,120 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:15:06,120 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790506120"}]},"ts":"1689790506120"} 2023-07-19 18:15:06,121 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-19 18:15:06,125 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:15:06,126 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 426 msec 2023-07-19 18:15:06,177 INFO [jenkins-hbase4:33827] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 18:15:06,179 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=fc387cdbe5b949b885e0bb11a247eb9c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:06,179 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689790506179"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790506179"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790506179"}]},"ts":"1689790506179"} 2023-07-19 18:15:06,181 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure fc387cdbe5b949b885e0bb11a247eb9c, server=jenkins-hbase4.apache.org,38691,1689790504269}] 2023-07-19 18:15:06,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-19 18:15:06,333 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:06,334 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:15:06,335 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44418, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:15:06,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 
2023-07-19 18:15:06,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fc387cdbe5b949b885e0bb11a247eb9c, NAME => 'np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:06,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:06,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,343 INFO [StoreOpener-fc387cdbe5b949b885e0bb11a247eb9c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,344 DEBUG [StoreOpener-fc387cdbe5b949b885e0bb11a247eb9c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c/fam1 2023-07-19 18:15:06,344 DEBUG [StoreOpener-fc387cdbe5b949b885e0bb11a247eb9c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c/fam1 2023-07-19 18:15:06,345 INFO [StoreOpener-fc387cdbe5b949b885e0bb11a247eb9c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fc387cdbe5b949b885e0bb11a247eb9c columnFamilyName fam1 2023-07-19 18:15:06,345 INFO [StoreOpener-fc387cdbe5b949b885e0bb11a247eb9c-1] regionserver.HStore(310): Store=fc387cdbe5b949b885e0bb11a247eb9c/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:06,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:06,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fc387cdbe5b949b885e0bb11a247eb9c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9475484480, jitterRate=-0.11752673983573914}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:06,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fc387cdbe5b949b885e0bb11a247eb9c: 2023-07-19 18:15:06,354 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c., pid=18, masterSystemTime=1689790506333 2023-07-19 18:15:06,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 2023-07-19 18:15:06,359 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 2023-07-19 18:15:06,359 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=fc387cdbe5b949b885e0bb11a247eb9c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:06,359 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689790506359"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790506359"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790506359"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790506359"}]},"ts":"1689790506359"} 2023-07-19 18:15:06,362 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-19 18:15:06,362 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure fc387cdbe5b949b885e0bb11a247eb9c, server=jenkins-hbase4.apache.org,38691,1689790504269 in 180 msec 2023-07-19 18:15:06,364 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-19 18:15:06,365 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=fc387cdbe5b949b885e0bb11a247eb9c, ASSIGN in 337 msec 2023-07-19 18:15:06,365 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:15:06,365 DEBUG [PEWorker-2] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790506365"}]},"ts":"1689790506365"} 2023-07-19 18:15:06,367 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-19 18:15:06,369 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:15:06,371 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 469 msec 2023-07-19 18:15:06,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-19 18:15:06,510 INFO [Listener at localhost/37435] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-19 18:15:06,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:06,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-19 18:15:06,518 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:15:06,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-19 18:15:06,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 18:15:06,542 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:15:06,543 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54458, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:15:06,550 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=38 msec 2023-07-19 18:15:06,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 18:15:06,627 INFO [Listener at localhost/37435] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-19 18:15:06,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:06,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:06,630 INFO [Listener at localhost/37435] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-19 18:15:06,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-19 18:15:06,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-19 18:15:06,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 18:15:06,635 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790506635"}]},"ts":"1689790506635"} 2023-07-19 18:15:06,636 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-19 18:15:06,638 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-19 18:15:06,638 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=fc387cdbe5b949b885e0bb11a247eb9c, UNASSIGN}] 2023-07-19 18:15:06,639 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=fc387cdbe5b949b885e0bb11a247eb9c, UNASSIGN 2023-07-19 18:15:06,640 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=fc387cdbe5b949b885e0bb11a247eb9c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:06,640 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689790506640"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790506640"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790506640"}]},"ts":"1689790506640"} 2023-07-19 18:15:06,644 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure fc387cdbe5b949b885e0bb11a247eb9c, server=jenkins-hbase4.apache.org,38691,1689790504269}] 2023-07-19 18:15:06,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 18:15:06,796 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fc387cdbe5b949b885e0bb11a247eb9c, disabling compactions & flushes 2023-07-19 18:15:06,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 2023-07-19 18:15:06,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 2023-07-19 18:15:06,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. after waiting 0 ms 2023-07-19 18:15:06,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 2023-07-19 18:15:06,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:15:06,801 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c. 
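The rollback of pid=19 recorded above is the namespace region quota at work: 'np1' permits 5 regions in total, np1:table1 already occupies one, and np1:table2 was requested with more pre-split regions than remain, so the master rejects the create with a QuotaExceededException before any region is created. A minimal Java sketch of that pattern is shown below; the quota property key, the split points, and the assumption that hbase.quota.enabled=true on the cluster are all illustrative, not read from this log.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceRegionQuotaSketch {
      public static void main(String[] args) throws Exception {
        // Sketch only: assumes a reachable cluster with quota support enabled.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          admin.createNamespace(NamespaceDescriptor.create("np1")
              .addConfiguration("hbase.namespace.quota.maxregions", "5")
              .build());

          // One single-region table fits under the cap.
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table1"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1")).build());

          // A second table pre-split into more regions than remain is rejected by the master;
          // the client sees the failure when the create future completes, as in the log above.
          byte[][] splits = { Bytes.toBytes("2"), Bytes.toBytes("4"), Bytes.toBytes("6"),
              Bytes.toBytes("8"), Bytes.toBytes("a") }; // 5 split points => 6 regions requested
          try {
            admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table2"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1")).build(), splits);
          } catch (IOException expected) {
            // Message mirrors the QuotaExceededException recorded for pid=19.
          }
        }
      }
    }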
2023-07-19 18:15:06,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fc387cdbe5b949b885e0bb11a247eb9c: 2023-07-19 18:15:06,803 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,803 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=fc387cdbe5b949b885e0bb11a247eb9c, regionState=CLOSED 2023-07-19 18:15:06,804 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689790506803"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790506803"}]},"ts":"1689790506803"} 2023-07-19 18:15:06,806 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-19 18:15:06,806 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure fc387cdbe5b949b885e0bb11a247eb9c, server=jenkins-hbase4.apache.org,38691,1689790504269 in 161 msec 2023-07-19 18:15:06,808 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-19 18:15:06,808 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=fc387cdbe5b949b885e0bb11a247eb9c, UNASSIGN in 168 msec 2023-07-19 18:15:06,809 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790506808"}]},"ts":"1689790506808"} 2023-07-19 18:15:06,810 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-19 18:15:06,812 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-19 18:15:06,814 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 182 msec 2023-07-19 18:15:06,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 18:15:06,937 INFO [Listener at localhost/37435] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-19 18:15:06,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-19 18:15:06,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-19 18:15:06,941 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 18:15:06,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-19 18:15:06,942 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 18:15:06,943 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:06,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 18:15:06,946 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-19 18:15:06,948 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c/fam1, FileablePath, hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c/recovered.edits] 2023-07-19 18:15:06,954 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c/recovered.edits/4.seqid to hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/archive/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c/recovered.edits/4.seqid 2023-07-19 18:15:06,955 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/.tmp/data/np1/table1/fc387cdbe5b949b885e0bb11a247eb9c 2023-07-19 18:15:06,955 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-19 18:15:06,957 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 18:15:06,959 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-19 18:15:06,961 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-19 18:15:06,962 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 18:15:06,962 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-19 18:15:06,962 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790506962"}]},"ts":"9223372036854775807"} 2023-07-19 18:15:06,964 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 18:15:06,964 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => fc387cdbe5b949b885e0bb11a247eb9c, NAME => 'np1:table1,,1689790505900.fc387cdbe5b949b885e0bb11a247eb9c.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 18:15:06,964 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
2023-07-19 18:15:06,964 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689790506964"}]},"ts":"9223372036854775807"} 2023-07-19 18:15:06,965 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-19 18:15:06,969 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 18:15:06,970 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 31 msec 2023-07-19 18:15:07,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-19 18:15:07,049 INFO [Listener at localhost/37435] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-19 18:15:07,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-19 18:15:07,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-19 18:15:07,065 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 18:15:07,068 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 18:15:07,070 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 18:15:07,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-19 18:15:07,072 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-19 18:15:07,072 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:15:07,073 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 18:15:07,075 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 18:15:07,077 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 20 msec 2023-07-19 18:15:07,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33827] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-19 18:15:07,172 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-19 18:15:07,172 INFO [Listener at 
localhost/37435] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-19 18:15:07,172 DEBUG [Listener at localhost/37435] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c5dfa12 to 127.0.0.1:55505 2023-07-19 18:15:07,172 DEBUG [Listener at localhost/37435] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:07,172 DEBUG [Listener at localhost/37435] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-19 18:15:07,173 DEBUG [Listener at localhost/37435] util.JVMClusterUtil(257): Found active master hash=1707062626, stopped=false 2023-07-19 18:15:07,173 DEBUG [Listener at localhost/37435] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 18:15:07,173 DEBUG [Listener at localhost/37435] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 18:15:07,173 DEBUG [Listener at localhost/37435] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-19 18:15:07,173 INFO [Listener at localhost/37435] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33827,1689790503765 2023-07-19 18:15:07,180 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:07,180 INFO [Listener at localhost/37435] procedure2.ProcedureExecutor(629): Stopping 2023-07-19 18:15:07,181 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:07,180 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:07,182 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:07,180 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:07,180 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:07,182 DEBUG [Listener at localhost/37435] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2e2695be to 127.0.0.1:55505 2023-07-19 18:15:07,182 DEBUG [Listener at localhost/37435] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:07,183 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:07,183 INFO [Listener at localhost/37435] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45277,1689790503944' ***** 
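The entries above show the test cleaning up after the quota check: np1:table1 is disabled (pid=20) and deleted (pid=23), the now-empty namespace np1 is removed (pid=24), and HBaseTestingUtility then begins shutting the minicluster down. A minimal sketch of that cleanup sequence through the Admin API follows; the Connection and HBaseTestingUtility arguments are assumptions standing in for objects the test harness already owns.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class NamespaceCleanupSketch {
      // Sketch only: nothing here is taken verbatim from the test source.
      static void cleanUp(Connection conn, HBaseTestingUtility testUtil) throws Exception {
        TableName t1 = TableName.valueOf("np1:table1");
        try (Admin admin = conn.getAdmin()) {
          admin.disableTable(t1);       // DisableTableProcedure (pid=20): unassign the region
          admin.deleteTable(t1);        // DeleteTableProcedure (pid=23): archive HFiles, clean hbase:meta
          admin.deleteNamespace("np1"); // DeleteNamespaceProcedure (pid=24): only legal once the namespace is empty
        }
        testUtil.shutdownMiniCluster(); // stop master, region servers, ZooKeeper and DFS
      }
    }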
2023-07-19 18:15:07,183 INFO [Listener at localhost/37435] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:07,183 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:07,182 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:07,183 INFO [Listener at localhost/37435] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38039,1689790504113' ***** 2023-07-19 18:15:07,183 INFO [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:07,183 INFO [Listener at localhost/37435] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:07,183 INFO [Listener at localhost/37435] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38691,1689790504269' ***** 2023-07-19 18:15:07,184 INFO [Listener at localhost/37435] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:07,183 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:07,187 INFO [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:07,196 INFO [RS:0;jenkins-hbase4:45277] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@27061e18{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:07,196 INFO [RS:2;jenkins-hbase4:38691] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@64b278aa{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:07,196 INFO [RS:1;jenkins-hbase4:38039] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4c4e4298{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:07,196 INFO [RS:0;jenkins-hbase4:45277] server.AbstractConnector(383): Stopped ServerConnector@619d9d61{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:07,196 INFO [RS:0;jenkins-hbase4:45277] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:07,196 INFO [RS:2;jenkins-hbase4:38691] server.AbstractConnector(383): Stopped ServerConnector@7596ce29{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:07,197 INFO [RS:0;jenkins-hbase4:45277] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d8a119b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:07,197 INFO [RS:1;jenkins-hbase4:38039] server.AbstractConnector(383): Stopped ServerConnector@2f107ff7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:07,199 INFO [RS:0;jenkins-hbase4:45277] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@ea82fff{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:07,197 INFO [RS:2;jenkins-hbase4:38691] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:07,199 INFO [RS:1;jenkins-hbase4:38039] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:07,199 INFO [RS:2;jenkins-hbase4:38691] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7fb867a7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:07,200 INFO [RS:1;jenkins-hbase4:38039] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3de75ac7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:07,200 INFO [RS:2;jenkins-hbase4:38691] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@53ecbb85{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:07,200 INFO [RS:0;jenkins-hbase4:45277] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:07,200 INFO [RS:1;jenkins-hbase4:38039] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@35eb1866{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:07,200 INFO [RS:0;jenkins-hbase4:45277] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 18:15:07,200 INFO [RS:0;jenkins-hbase4:45277] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 18:15:07,200 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 18:15:07,200 INFO [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(3305): Received CLOSE for fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:07,201 INFO [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:07,201 INFO [RS:2;jenkins-hbase4:38691] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:07,201 DEBUG [RS:0;jenkins-hbase4:45277] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x03aac91c to 127.0.0.1:55505 2023-07-19 18:15:07,201 INFO [RS:2;jenkins-hbase4:38691] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-19 18:15:07,202 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 18:15:07,202 DEBUG [RS:0;jenkins-hbase4:45277] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:07,201 INFO [RS:1;jenkins-hbase4:38039] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:07,203 INFO [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 18:15:07,203 INFO [RS:1;jenkins-hbase4:38039] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 18:15:07,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fc3bcf08d6bff82f0c5a86c17edbb657, disabling compactions & flushes 2023-07-19 18:15:07,202 INFO [RS:2;jenkins-hbase4:38691] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 18:15:07,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:07,204 INFO [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:07,204 INFO [RS:1;jenkins-hbase4:38039] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 18:15:07,204 DEBUG [RS:2;jenkins-hbase4:38691] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4a86aaad to 127.0.0.1:55505 2023-07-19 18:15:07,204 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(3305): Received CLOSE for fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:07,203 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 18:15:07,203 DEBUG [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1478): Online Regions={fc3bcf08d6bff82f0c5a86c17edbb657=hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657.} 2023-07-19 18:15:07,204 DEBUG [RS:2;jenkins-hbase4:38691] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:07,204 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(3305): Received CLOSE for 47cade7ff99f47216401129cba97f5af 2023-07-19 18:15:07,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:07,205 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:07,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fb86e4485da6b70be23465c48bc14fd0, disabling compactions & flushes 2023-07-19 18:15:07,205 DEBUG [RS:1;jenkins-hbase4:38039] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x115e22e0 to 127.0.0.1:55505 2023-07-19 18:15:07,205 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 2023-07-19 18:15:07,205 INFO [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38691,1689790504269; all regions closed. 
2023-07-19 18:15:07,204 DEBUG [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1504): Waiting on fc3bcf08d6bff82f0c5a86c17edbb657 2023-07-19 18:15:07,205 DEBUG [RS:2;jenkins-hbase4:38691] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-19 18:15:07,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 2023-07-19 18:15:07,205 DEBUG [RS:1;jenkins-hbase4:38039] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:07,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. after waiting 0 ms 2023-07-19 18:15:07,206 INFO [RS:1;jenkins-hbase4:38039] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 18:15:07,206 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. after waiting 0 ms 2023-07-19 18:15:07,206 INFO [RS:1;jenkins-hbase4:38039] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:07,206 INFO [RS:1;jenkins-hbase4:38039] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 18:15:07,206 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:07,206 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-19 18:15:07,206 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 
2023-07-19 18:15:07,206 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing fb86e4485da6b70be23465c48bc14fd0 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-19 18:15:07,209 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-19 18:15:07,209 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, fb86e4485da6b70be23465c48bc14fd0=hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0., 47cade7ff99f47216401129cba97f5af=hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af.} 2023-07-19 18:15:07,209 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1504): Waiting on 1588230740, 47cade7ff99f47216401129cba97f5af, fb86e4485da6b70be23465c48bc14fd0 2023-07-19 18:15:07,211 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 18:15:07,211 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 18:15:07,211 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 18:15:07,211 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 18:15:07,212 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 18:15:07,212 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-19 18:15:07,212 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:07,213 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:07,217 DEBUG [RS:2;jenkins-hbase4:38691] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/oldWALs 2023-07-19 18:15:07,217 INFO [RS:2;jenkins-hbase4:38691] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38691%2C1689790504269:(num 1689790504946) 2023-07-19 18:15:07,217 DEBUG [RS:2;jenkins-hbase4:38691] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:07,217 INFO [RS:2;jenkins-hbase4:38691] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:07,218 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:07,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/quota/fc3bcf08d6bff82f0c5a86c17edbb657/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:15:07,219 INFO [RS:2;jenkins-hbase4:38691] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:07,219 INFO [RS:2;jenkins-hbase4:38691] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-19 18:15:07,219 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 18:15:07,219 INFO [RS:2;jenkins-hbase4:38691] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:07,220 INFO [RS:2;jenkins-hbase4:38691] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 18:15:07,221 INFO [RS:2;jenkins-hbase4:38691] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38691 2023-07-19 18:15:07,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:07,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fc3bcf08d6bff82f0c5a86c17edbb657: 2023-07-19 18:15:07,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689790505699.fc3bcf08d6bff82f0c5a86c17edbb657. 2023-07-19 18:15:07,225 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:07,225 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:07,225 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:07,225 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:07,225 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38691,1689790504269 2023-07-19 18:15:07,225 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:07,225 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:07,229 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38691,1689790504269] 2023-07-19 18:15:07,229 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38691,1689790504269; numProcessing=1 2023-07-19 18:15:07,232 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38691,1689790504269 already deleted, retry=false 2023-07-19 
18:15:07,232 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38691,1689790504269 expired; onlineServers=2 2023-07-19 18:15:07,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0/.tmp/m/08a9c031b6b442a8b2199faafd086d1a 2023-07-19 18:15:07,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0/.tmp/m/08a9c031b6b442a8b2199faafd086d1a as hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0/m/08a9c031b6b442a8b2199faafd086d1a 2023-07-19 18:15:07,257 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0/m/08a9c031b6b442a8b2199faafd086d1a, entries=1, sequenceid=7, filesize=4.9 K 2023-07-19 18:15:07,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for fb86e4485da6b70be23465c48bc14fd0 in 53ms, sequenceid=7, compaction requested=false 2023-07-19 18:15:07,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-19 18:15:07,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/rsgroup/fb86e4485da6b70be23465c48bc14fd0/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-19 18:15:07,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 18:15:07,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 2023-07-19 18:15:07,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fb86e4485da6b70be23465c48bc14fd0: 2023-07-19 18:15:07,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689790505284.fb86e4485da6b70be23465c48bc14fd0. 2023-07-19 18:15:07,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 47cade7ff99f47216401129cba97f5af, disabling compactions & flushes 2023-07-19 18:15:07,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 2023-07-19 18:15:07,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 
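Editor's note: the hbase:rsgroup flush above follows the usual two-step pattern: the memstore is written to a new HFile under the region's .tmp/ directory, the file is then committed (renamed) into the store directory, and a recovered.edits/<n>.seqid marker records the region's max sequence id at close. Here the flush is triggered by the region close during shutdown; the sketch below only shows the analogous client-side way to drive the same flush path while a cluster is running (connection settings are assumed defaults, not taken from this log).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // An explicit flush goes through the same store-flusher path as the
      // close-time flush above: write under .tmp/, then commit into the store dir.
      admin.flush(TableName.valueOf("hbase:rsgroup"));
    }
  }
}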
2023-07-19 18:15:07,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. after waiting 0 ms 2023-07-19 18:15:07,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 2023-07-19 18:15:07,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 47cade7ff99f47216401129cba97f5af 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-19 18:15:07,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/namespace/47cade7ff99f47216401129cba97f5af/.tmp/info/def58f9f14884502b0e828403295d69d 2023-07-19 18:15:07,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for def58f9f14884502b0e828403295d69d 2023-07-19 18:15:07,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/namespace/47cade7ff99f47216401129cba97f5af/.tmp/info/def58f9f14884502b0e828403295d69d as hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/namespace/47cade7ff99f47216401129cba97f5af/info/def58f9f14884502b0e828403295d69d 2023-07-19 18:15:07,290 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for def58f9f14884502b0e828403295d69d 2023-07-19 18:15:07,290 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/namespace/47cade7ff99f47216401129cba97f5af/info/def58f9f14884502b0e828403295d69d, entries=3, sequenceid=8, filesize=5.0 K 2023-07-19 18:15:07,290 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 47cade7ff99f47216401129cba97f5af in 24ms, sequenceid=8, compaction requested=false 2023-07-19 18:15:07,291 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-19 18:15:07,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/namespace/47cade7ff99f47216401129cba97f5af/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-19 18:15:07,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 2023-07-19 18:15:07,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 47cade7ff99f47216401129cba97f5af: 2023-07-19 18:15:07,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689790505136.47cade7ff99f47216401129cba97f5af. 
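Editor's note: the "(bloomFilter=true)" and "Loaded Delete Family Bloom (CompoundBloomFilter) metadata" entries reflect bloom filter metadata on the store files just flushed. For a user table that choice is made on the column family descriptor; a hedged sketch with made-up table and family names, purely for illustration.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class BloomFilterDescriptorSketch {
  public static TableDescriptor build() {
    // ROW blooms add a per-HFile filter that point reads consult before seeking.
    ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.ROW)
        .build();
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example_table"))
        .setColumnFamily(cf)
        .build();
  }
}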
2023-07-19 18:15:07,380 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:07,380 INFO [RS:2;jenkins-hbase4:38691] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38691,1689790504269; zookeeper connection closed. 2023-07-19 18:15:07,380 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38691-0x1017ecb5e400003, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:07,381 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@319c2407] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@319c2407 2023-07-19 18:15:07,406 INFO [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45277,1689790503944; all regions closed. 2023-07-19 18:15:07,406 DEBUG [RS:0;jenkins-hbase4:45277] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-19 18:15:07,411 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-19 18:15:07,414 DEBUG [RS:0;jenkins-hbase4:45277] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/oldWALs 2023-07-19 18:15:07,414 INFO [RS:0;jenkins-hbase4:45277] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45277%2C1689790503944:(num 1689790504945) 2023-07-19 18:15:07,414 DEBUG [RS:0;jenkins-hbase4:45277] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:07,414 INFO [RS:0;jenkins-hbase4:45277] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:07,414 INFO [RS:0;jenkins-hbase4:45277] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:07,414 INFO [RS:0;jenkins-hbase4:45277] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 18:15:07,414 INFO [RS:0;jenkins-hbase4:45277] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:07,414 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 18:15:07,414 INFO [RS:0;jenkins-hbase4:45277] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
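Editor's note: as each region server stops, its WAL is closed and the finished log file is moved under the shared oldWALs directory ("Moved 1 WAL file(s) to .../oldWALs"), where it stays until the master's log cleaner removes it. A small sketch, using plain Hadoop FileSystem calls, of how that directory could be inspected; the path is copied from this run and would differ elsewhere.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListOldWals {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Test-run specific location; taken from the log above.
    Path oldWals = new Path(
        "hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/oldWALs");
    FileSystem fs = oldWals.getFileSystem(conf);
    for (FileStatus status : fs.listStatus(oldWals)) {
      // One entry per archived WAL, e.g. jenkins-hbase4.apache.org%2C38691%2C...
      System.out.println(status.getPath().getName() + " (" + status.getLen() + " bytes)");
    }
  }
}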
2023-07-19 18:15:07,415 INFO [RS:0;jenkins-hbase4:45277] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45277 2023-07-19 18:15:07,419 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:07,419 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:07,419 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45277,1689790503944 2023-07-19 18:15:07,420 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45277,1689790503944] 2023-07-19 18:15:07,420 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45277,1689790503944; numProcessing=2 2023-07-19 18:15:07,421 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45277,1689790503944 already deleted, retry=false 2023-07-19 18:15:07,422 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45277,1689790503944 expired; onlineServers=1 2023-07-19 18:15:07,612 DEBUG [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-19 18:15:07,645 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/.tmp/info/bfa543e9c1c54e488f661fb79b1242ea 2023-07-19 18:15:07,652 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bfa543e9c1c54e488f661fb79b1242ea 2023-07-19 18:15:07,666 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/.tmp/rep_barrier/67da5eb61e774387b34745ec24f2c52c 2023-07-19 18:15:07,671 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 67da5eb61e774387b34745ec24f2c52c 2023-07-19 18:15:07,691 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/.tmp/table/a3004834f6f245018e4a437d69e3b60a 2023-07-19 18:15:07,697 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a3004834f6f245018e4a437d69e3b60a 2023-07-19 18:15:07,698 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/.tmp/info/bfa543e9c1c54e488f661fb79b1242ea as hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/info/bfa543e9c1c54e488f661fb79b1242ea 2023-07-19 18:15:07,704 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bfa543e9c1c54e488f661fb79b1242ea 2023-07-19 18:15:07,704 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/info/bfa543e9c1c54e488f661fb79b1242ea, entries=32, sequenceid=31, filesize=8.5 K 2023-07-19 18:15:07,705 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/.tmp/rep_barrier/67da5eb61e774387b34745ec24f2c52c as hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/rep_barrier/67da5eb61e774387b34745ec24f2c52c 2023-07-19 18:15:07,712 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 67da5eb61e774387b34745ec24f2c52c 2023-07-19 18:15:07,712 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/rep_barrier/67da5eb61e774387b34745ec24f2c52c, entries=1, sequenceid=31, filesize=4.9 K 2023-07-19 18:15:07,713 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/.tmp/table/a3004834f6f245018e4a437d69e3b60a as hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/table/a3004834f6f245018e4a437d69e3b60a 2023-07-19 18:15:07,720 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a3004834f6f245018e4a437d69e3b60a 2023-07-19 18:15:07,720 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/table/a3004834f6f245018e4a437d69e3b60a, entries=8, sequenceid=31, filesize=5.2 K 2023-07-19 18:15:07,721 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 509ms, sequenceid=31, compaction requested=false 2023-07-19 18:15:07,721 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-19 18:15:07,735 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-19 18:15:07,736 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 18:15:07,736 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 18:15:07,736 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 18:15:07,736 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-19 18:15:07,806 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-19 18:15:07,806 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-19 18:15:07,812 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38039,1689790504113; all regions closed. 2023-07-19 18:15:07,812 DEBUG [RS:1;jenkins-hbase4:38039] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-19 18:15:07,819 DEBUG [RS:1;jenkins-hbase4:38039] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/oldWALs 2023-07-19 18:15:07,819 INFO [RS:1;jenkins-hbase4:38039] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38039%2C1689790504113.meta:.meta(num 1689790505066) 2023-07-19 18:15:07,826 DEBUG [RS:1;jenkins-hbase4:38039] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/oldWALs 2023-07-19 18:15:07,826 INFO [RS:1;jenkins-hbase4:38039] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38039%2C1689790504113:(num 1689790504946) 2023-07-19 18:15:07,826 DEBUG [RS:1;jenkins-hbase4:38039] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:07,826 INFO [RS:1;jenkins-hbase4:38039] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:07,826 INFO [RS:1;jenkins-hbase4:38039] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:07,826 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
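Editor's note: region 1588230740 closed above is hbase:meta; its info, rep_barrier and table families were flushed to 8.5 K, 4.9 K and 5.2 K store files before the close. While a cluster is up, that same catalog data is readable with an ordinary client scan; a sketch assuming default client configuration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class MetaScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan())) {
      for (Result row : scanner) {
        // Each row is a catalog entry, the same data flushed into
        // .../meta/1588230740/info/ in the entries above.
        System.out.println(row);
      }
    }
  }
}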
2023-07-19 18:15:07,827 INFO [RS:1;jenkins-hbase4:38039] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38039 2023-07-19 18:15:07,832 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38039,1689790504113 2023-07-19 18:15:07,832 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:07,833 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38039,1689790504113] 2023-07-19 18:15:07,833 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38039,1689790504113; numProcessing=3 2023-07-19 18:15:07,835 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38039,1689790504113 already deleted, retry=false 2023-07-19 18:15:07,835 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38039,1689790504113 expired; onlineServers=0 2023-07-19 18:15:07,835 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33827,1689790503765' ***** 2023-07-19 18:15:07,835 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-19 18:15:07,836 DEBUG [M:0;jenkins-hbase4:33827] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61802d0d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:15:07,836 INFO [M:0;jenkins-hbase4:33827] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:07,838 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-19 18:15:07,838 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:07,838 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:15:07,838 INFO [M:0;jenkins-hbase4:33827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@38a9d218{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 18:15:07,839 INFO [M:0;jenkins-hbase4:33827] server.AbstractConnector(383): Stopped ServerConnector@78e67007{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:07,839 INFO [M:0;jenkins-hbase4:33827] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:07,839 INFO [M:0;jenkins-hbase4:33827] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@5f546421{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:07,840 INFO [M:0;jenkins-hbase4:33827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e960f4c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:07,840 INFO [M:0;jenkins-hbase4:33827] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33827,1689790503765 2023-07-19 18:15:07,840 INFO [M:0;jenkins-hbase4:33827] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33827,1689790503765; all regions closed. 2023-07-19 18:15:07,840 DEBUG [M:0;jenkins-hbase4:33827] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:07,840 INFO [M:0;jenkins-hbase4:33827] master.HMaster(1491): Stopping master jetty server 2023-07-19 18:15:07,841 INFO [M:0;jenkins-hbase4:33827] server.AbstractConnector(383): Stopped ServerConnector@49e1e3dd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:07,841 DEBUG [M:0;jenkins-hbase4:33827] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-19 18:15:07,841 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-19 18:15:07,841 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689790504678] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689790504678,5,FailOnTimeoutGroup] 2023-07-19 18:15:07,841 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689790504679] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689790504679,5,FailOnTimeoutGroup] 2023-07-19 18:15:07,841 DEBUG [M:0;jenkins-hbase4:33827] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-19 18:15:07,843 INFO [M:0;jenkins-hbase4:33827] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-19 18:15:07,843 INFO [M:0;jenkins-hbase4:33827] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-19 18:15:07,844 INFO [M:0;jenkins-hbase4:33827] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:07,844 DEBUG [M:0;jenkins-hbase4:33827] master.HMaster(1512): Stopping service threads 2023-07-19 18:15:07,844 INFO [M:0;jenkins-hbase4:33827] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-19 18:15:07,844 ERROR [M:0;jenkins-hbase4:33827] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-19 18:15:07,844 INFO [M:0;jenkins-hbase4:33827] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-19 18:15:07,845 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
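Editor's note: before exiting, the master cancels its LogCleaner and HFileCleaner chores, the threads that would otherwise purge archived WALs and HFiles. Their behaviour is configurable; a hedged sketch of two related settings (the values are examples, not taken from this test).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerSettingsSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // How long (ms) archived WALs stay in oldWALs before the LogCleaner deletes them.
    conf.setLong("hbase.master.logcleaner.ttl", 600_000L);
    // Cleaner delegates can be plugged in; the default chain already includes
    // a TTL-based HFile cleaner.
    conf.set("hbase.master.hfilecleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
    return conf;
  }
}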
2023-07-19 18:15:07,845 DEBUG [M:0;jenkins-hbase4:33827] zookeeper.ZKUtil(398): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-19 18:15:07,845 WARN [M:0;jenkins-hbase4:33827] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-19 18:15:07,845 INFO [M:0;jenkins-hbase4:33827] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-19 18:15:07,845 INFO [M:0;jenkins-hbase4:33827] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-19 18:15:07,846 DEBUG [M:0;jenkins-hbase4:33827] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 18:15:07,846 INFO [M:0;jenkins-hbase4:33827] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:07,846 DEBUG [M:0;jenkins-hbase4:33827] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:07,846 DEBUG [M:0;jenkins-hbase4:33827] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 18:15:07,846 DEBUG [M:0;jenkins-hbase4:33827] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:07,846 INFO [M:0;jenkins-hbase4:33827] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.94 KB heapSize=109.10 KB 2023-07-19 18:15:07,858 INFO [M:0;jenkins-hbase4:33827] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.94 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f0d791e5f675455290eec4b0c2cb40a6 2023-07-19 18:15:07,864 DEBUG [M:0;jenkins-hbase4:33827] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f0d791e5f675455290eec4b0c2cb40a6 as hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f0d791e5f675455290eec4b0c2cb40a6 2023-07-19 18:15:07,871 INFO [M:0;jenkins-hbase4:33827] regionserver.HStore(1080): Added hdfs://localhost:39265/user/jenkins/test-data/d3c8a704-41ea-8334-5e3e-a88e2e85efa6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f0d791e5f675455290eec4b0c2cb40a6, entries=24, sequenceid=194, filesize=12.4 K 2023-07-19 18:15:07,871 INFO [M:0;jenkins-hbase4:33827] regionserver.HRegion(2948): Finished flush of dataSize ~92.94 KB/95171, heapSize ~109.09 KB/111704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=194, compaction requested=false 2023-07-19 18:15:07,873 INFO [M:0;jenkins-hbase4:33827] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
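Editor's note on units in the flush summaries: the two numbers in "dataSize ~92.94 KB/95171" are the same quantity, the second in bytes and the first computed with a 1024 divisor (KiB). A quick check with the values copied from the entry above:

public class FlushSizeUnits {
  public static void main(String[] args) {
    // "dataSize ~92.94 KB/95171" and "heapSize ~109.09 KB/111704" from the log above.
    System.out.printf("%.2f KiB%n", 95171 / 1024.0);   // prints 92.94
    System.out.printf("%.2f KiB%n", 111704 / 1024.0);  // prints 109.09
  }
}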
2023-07-19 18:15:07,873 DEBUG [M:0;jenkins-hbase4:33827] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 18:15:07,879 INFO [M:0;jenkins-hbase4:33827] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-19 18:15:07,879 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 18:15:07,880 INFO [M:0;jenkins-hbase4:33827] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33827 2023-07-19 18:15:07,882 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:07,882 INFO [RS:0;jenkins-hbase4:45277] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45277,1689790503944; zookeeper connection closed. 2023-07-19 18:15:07,882 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:45277-0x1017ecb5e400001, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:07,884 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5417992a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5417992a 2023-07-19 18:15:07,884 DEBUG [M:0;jenkins-hbase4:33827] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33827,1689790503765 already deleted, retry=false 2023-07-19 18:15:07,982 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:07,982 INFO [RS:1;jenkins-hbase4:38039] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38039,1689790504113; zookeeper connection closed. 2023-07-19 18:15:07,983 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): regionserver:38039-0x1017ecb5e400002, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:07,983 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@24b0924e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@24b0924e 2023-07-19 18:15:07,983 INFO [Listener at localhost/37435] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-19 18:15:08,083 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:08,083 INFO [M:0;jenkins-hbase4:33827] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33827,1689790503765; zookeeper connection closed. 
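Editor's note: JVMClusterUtil reports the shutdown of 1 master and 3 region servers as complete, and the last ZooKeeper sessions close. The entries that follow tear down the DataNodes and the mini ZooKeeper quorum, after which the harness immediately starts a fresh minicluster with the same topology (numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1). A hedged sketch of how that option object is typically built in test code; the class name is illustrative.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class RestartMiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirrors the StartMiniClusterOption that HBaseTestingUtility prints at startup.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);
    // ... run assertions against the cluster ...
    util.shutdownMiniCluster();
  }
}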
2023-07-19 18:15:08,083 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): master:33827-0x1017ecb5e400000, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 18:15:08,084 WARN [Listener at localhost/37435] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 18:15:08,087 INFO [Listener at localhost/37435] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 18:15:08,193 WARN [BP-1752318598-172.31.14.131-1689790502948 heartbeating to localhost/127.0.0.1:39265] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 18:15:08,193 WARN [BP-1752318598-172.31.14.131-1689790502948 heartbeating to localhost/127.0.0.1:39265] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1752318598-172.31.14.131-1689790502948 (Datanode Uuid aa8454ef-ff0b-48ae-8802-2a1d3aab3d43) service to localhost/127.0.0.1:39265 2023-07-19 18:15:08,194 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/cluster_7389c4b2-efdf-2cf4-3ef3-9acc238f7553/dfs/data/data5/current/BP-1752318598-172.31.14.131-1689790502948] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:08,194 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/cluster_7389c4b2-efdf-2cf4-3ef3-9acc238f7553/dfs/data/data6/current/BP-1752318598-172.31.14.131-1689790502948] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:08,197 WARN [Listener at localhost/37435] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 18:15:08,202 INFO [Listener at localhost/37435] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 18:15:08,306 WARN [BP-1752318598-172.31.14.131-1689790502948 heartbeating to localhost/127.0.0.1:39265] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 18:15:08,306 WARN [BP-1752318598-172.31.14.131-1689790502948 heartbeating to localhost/127.0.0.1:39265] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1752318598-172.31.14.131-1689790502948 (Datanode Uuid 6f486418-2bef-444c-98ea-c19926f1d7ff) service to localhost/127.0.0.1:39265 2023-07-19 18:15:08,306 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/cluster_7389c4b2-efdf-2cf4-3ef3-9acc238f7553/dfs/data/data3/current/BP-1752318598-172.31.14.131-1689790502948] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:08,307 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/cluster_7389c4b2-efdf-2cf4-3ef3-9acc238f7553/dfs/data/data4/current/BP-1752318598-172.31.14.131-1689790502948] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:08,308 WARN [Listener at localhost/37435] datanode.DirectoryScanner(534): 
DirectoryScanner: shutdown has been called 2023-07-19 18:15:08,312 INFO [Listener at localhost/37435] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 18:15:08,417 WARN [BP-1752318598-172.31.14.131-1689790502948 heartbeating to localhost/127.0.0.1:39265] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 18:15:08,417 WARN [BP-1752318598-172.31.14.131-1689790502948 heartbeating to localhost/127.0.0.1:39265] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1752318598-172.31.14.131-1689790502948 (Datanode Uuid 7d00d50d-525b-4344-964b-46435a7e595d) service to localhost/127.0.0.1:39265 2023-07-19 18:15:08,418 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/cluster_7389c4b2-efdf-2cf4-3ef3-9acc238f7553/dfs/data/data1/current/BP-1752318598-172.31.14.131-1689790502948] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:08,418 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/cluster_7389c4b2-efdf-2cf4-3ef3-9acc238f7553/dfs/data/data2/current/BP-1752318598-172.31.14.131-1689790502948] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 18:15:08,428 INFO [Listener at localhost/37435] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 18:15:08,544 INFO [Listener at localhost/37435] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-19 18:15:08,573 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-19 18:15:08,573 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-19 18:15:08,574 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.log.dir so I do NOT create it in target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8 2023-07-19 18:15:08,574 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6be4824e-3f78-b4a3-6569-d6f08cabf95a/hadoop.tmp.dir so I do NOT create it in target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8 2023-07-19 18:15:08,574 INFO [Listener at localhost/37435] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f, deleteOnExit=true 2023-07-19 18:15:08,574 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-19 18:15:08,574 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/test.cache.data in system properties and HBase conf 2023-07-19 18:15:08,574 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.tmp.dir in system properties and HBase conf 2023-07-19 18:15:08,574 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir in system properties and HBase conf 2023-07-19 18:15:08,574 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-19 18:15:08,574 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-19 18:15:08,574 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-19 18:15:08,575 DEBUG [Listener at localhost/37435] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-19 18:15:08,575 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-19 18:15:08,575 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-19 18:15:08,575 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-19 18:15:08,575 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 18:15:08,575 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-19 18:15:08,575 INFO [Listener at localhost/37435] 
hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-19 18:15:08,575 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 18:15:08,575 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 18:15:08,576 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-19 18:15:08,576 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/nfs.dump.dir in system properties and HBase conf 2023-07-19 18:15:08,576 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/java.io.tmpdir in system properties and HBase conf 2023-07-19 18:15:08,576 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 18:15:08,576 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-19 18:15:08,576 INFO [Listener at localhost/37435] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-19 18:15:08,580 WARN [Listener at localhost/37435] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 18:15:08,580 WARN [Listener at localhost/37435] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 18:15:08,624 WARN [Listener at localhost/37435] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:15:08,626 INFO [Listener at localhost/37435] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:15:08,632 INFO 
[Listener at localhost/37435] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/java.io.tmpdir/Jetty_localhost_36677_hdfs____72e5tp/webapp 2023-07-19 18:15:08,642 DEBUG [Listener at localhost/37435-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1017ecb5e40000a, quorum=127.0.0.1:55505, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-19 18:15:08,642 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1017ecb5e40000a, quorum=127.0.0.1:55505, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-19 18:15:08,725 INFO [Listener at localhost/37435] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36677 2023-07-19 18:15:08,730 WARN [Listener at localhost/37435] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 18:15:08,730 WARN [Listener at localhost/37435] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 18:15:08,771 WARN [Listener at localhost/37897] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:15:08,787 WARN [Listener at localhost/37897] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 18:15:08,789 WARN [Listener at localhost/37897] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:15:08,790 INFO [Listener at localhost/37897] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:15:08,794 INFO [Listener at localhost/37897] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/java.io.tmpdir/Jetty_localhost_46663_datanode____.noj014/webapp 2023-07-19 18:15:08,888 INFO [Listener at localhost/37897] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46663 2023-07-19 18:15:08,898 WARN [Listener at localhost/35083] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:15:08,914 WARN [Listener at localhost/35083] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 18:15:08,917 WARN [Listener at localhost/35083] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:15:08,918 INFO [Listener at localhost/35083] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:15:08,922 INFO [Listener at localhost/35083] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/java.io.tmpdir/Jetty_localhost_46759_datanode____.ji2xc0/webapp 2023-07-19 18:15:09,002 INFO [Block report 
processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2d8cc53977f58b06: Processing first storage report for DS-62660876-5554-4c01-8f44-d352bdaabc93 from datanode e8e9f8e6-dfa9-4ff3-a1e1-03fff615490a 2023-07-19 18:15:09,003 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2d8cc53977f58b06: from storage DS-62660876-5554-4c01-8f44-d352bdaabc93 node DatanodeRegistration(127.0.0.1:37729, datanodeUuid=e8e9f8e6-dfa9-4ff3-a1e1-03fff615490a, infoPort=44603, infoSecurePort=0, ipcPort=35083, storageInfo=lv=-57;cid=testClusterID;nsid=412863164;c=1689790508583), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:15:09,003 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2d8cc53977f58b06: Processing first storage report for DS-ec6cfdd2-87ed-48c7-a996-dce0a3b7d646 from datanode e8e9f8e6-dfa9-4ff3-a1e1-03fff615490a 2023-07-19 18:15:09,003 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2d8cc53977f58b06: from storage DS-ec6cfdd2-87ed-48c7-a996-dce0a3b7d646 node DatanodeRegistration(127.0.0.1:37729, datanodeUuid=e8e9f8e6-dfa9-4ff3-a1e1-03fff615490a, infoPort=44603, infoSecurePort=0, ipcPort=35083, storageInfo=lv=-57;cid=testClusterID;nsid=412863164;c=1689790508583), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:15:09,022 INFO [Listener at localhost/35083] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46759 2023-07-19 18:15:09,030 WARN [Listener at localhost/40907] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:15:09,047 WARN [Listener at localhost/40907] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 18:15:09,050 WARN [Listener at localhost/40907] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 18:15:09,051 INFO [Listener at localhost/40907] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 18:15:09,056 INFO [Listener at localhost/40907] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/java.io.tmpdir/Jetty_localhost_34067_datanode____ewx3h3/webapp 2023-07-19 18:15:09,138 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc82ceac5ba8ca988: Processing first storage report for DS-b82a440c-7798-431e-9221-2eece5a20bac from datanode 3c8e98e5-038f-46d7-b4c6-412a2b6e394b 2023-07-19 18:15:09,138 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc82ceac5ba8ca988: from storage DS-b82a440c-7798-431e-9221-2eece5a20bac node DatanodeRegistration(127.0.0.1:32927, datanodeUuid=3c8e98e5-038f-46d7-b4c6-412a2b6e394b, infoPort=36611, infoSecurePort=0, ipcPort=40907, storageInfo=lv=-57;cid=testClusterID;nsid=412863164;c=1689790508583), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:15:09,138 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc82ceac5ba8ca988: Processing first storage report for DS-7a050f10-7a88-40b6-8bdf-c3ceb301a53d from datanode 
3c8e98e5-038f-46d7-b4c6-412a2b6e394b 2023-07-19 18:15:09,138 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc82ceac5ba8ca988: from storage DS-7a050f10-7a88-40b6-8bdf-c3ceb301a53d node DatanodeRegistration(127.0.0.1:32927, datanodeUuid=3c8e98e5-038f-46d7-b4c6-412a2b6e394b, infoPort=36611, infoSecurePort=0, ipcPort=40907, storageInfo=lv=-57;cid=testClusterID;nsid=412863164;c=1689790508583), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:15:09,171 INFO [Listener at localhost/40907] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34067 2023-07-19 18:15:09,177 WARN [Listener at localhost/41015] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 18:15:09,274 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa36165e8e50d37b0: Processing first storage report for DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4 from datanode 660f14bb-e20b-4d77-99a5-a1a137a0e71d 2023-07-19 18:15:09,274 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa36165e8e50d37b0: from storage DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4 node DatanodeRegistration(127.0.0.1:45633, datanodeUuid=660f14bb-e20b-4d77-99a5-a1a137a0e71d, infoPort=37409, infoSecurePort=0, ipcPort=41015, storageInfo=lv=-57;cid=testClusterID;nsid=412863164;c=1689790508583), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-19 18:15:09,275 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa36165e8e50d37b0: Processing first storage report for DS-b99b42dd-1af7-4fce-a410-f03a5984582b from datanode 660f14bb-e20b-4d77-99a5-a1a137a0e71d 2023-07-19 18:15:09,275 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa36165e8e50d37b0: from storage DS-b99b42dd-1af7-4fce-a410-f03a5984582b node DatanodeRegistration(127.0.0.1:45633, datanodeUuid=660f14bb-e20b-4d77-99a5-a1a137a0e71d, infoPort=37409, infoSecurePort=0, ipcPort=41015, storageInfo=lv=-57;cid=testClusterID;nsid=412863164;c=1689790508583), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 18:15:09,285 DEBUG [Listener at localhost/41015] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8 2023-07-19 18:15:09,287 INFO [Listener at localhost/41015] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/zookeeper_0, clientPort=50044, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-19 18:15:09,289 INFO [Listener at localhost/41015] zookeeper.MiniZooKeeperCluster(283): 
Started MiniZooKeeperCluster and ran 'stat' on client port=50044 2023-07-19 18:15:09,289 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,290 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,305 INFO [Listener at localhost/41015] util.FSUtils(471): Created version file at hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8 with version=8 2023-07-19 18:15:09,305 INFO [Listener at localhost/41015] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:41243/user/jenkins/test-data/6cf1c4c9-bd65-191f-6af0-b94d6aeca475/hbase-staging 2023-07-19 18:15:09,306 DEBUG [Listener at localhost/41015] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-19 18:15:09,306 DEBUG [Listener at localhost/41015] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-19 18:15:09,306 DEBUG [Listener at localhost/41015] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-19 18:15:09,306 DEBUG [Listener at localhost/41015] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-19 18:15:09,307 INFO [Listener at localhost/41015] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:15:09,307 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,307 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,307 INFO [Listener at localhost/41015] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:15:09,307 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,307 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:15:09,307 INFO [Listener at localhost/41015] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:15:09,308 INFO [Listener at localhost/41015] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39305 2023-07-19 18:15:09,308 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,309 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,310 INFO [Listener at localhost/41015] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39305 connecting to ZooKeeper ensemble=127.0.0.1:50044 2023-07-19 18:15:09,317 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:393050x0, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:15:09,318 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39305-0x1017ecb73ea0000 connected 2023-07-19 18:15:09,332 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:15:09,332 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:09,333 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:15:09,333 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39305 2023-07-19 18:15:09,333 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39305 2023-07-19 18:15:09,334 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39305 2023-07-19 18:15:09,334 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39305 2023-07-19 18:15:09,334 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39305 2023-07-19 18:15:09,336 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:15:09,336 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:15:09,336 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:15:09,336 INFO [Listener at localhost/41015] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-19 18:15:09,336 INFO [Listener at localhost/41015] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:15:09,337 INFO [Listener at localhost/41015] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:15:09,337 INFO [Listener at localhost/41015] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
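[editorial sketch] The master at 39305 above opens a session to the ZooKeeper ensemble at 127.0.0.1:50044 and, via ZKUtil, sets watchers on znodes that do not exist yet (/hbase/master, /hbase/running, /hbase/acl). A minimal sketch of that watch-before-create pattern with the plain ZooKeeper client; the ensemble address and znode path are copied from this run, and the class name ZkWatchSketch is made up for illustration:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkWatchSketch {
        public static void main(String[] args) throws Exception {
            // Session against the test ensemble shown in the log (the port is specific to this run).
            ZooKeeper zk = new ZooKeeper("127.0.0.1:50044", 30000,
                    (WatchedEvent e) -> System.out.println("event=" + e.getType() + " path=" + e.getPath()));
            // exists() registers a watch even when the znode is absent, so a later
            // NodeCreated event for /hbase/master is delivered to the watcher above --
            // the same effect ZKUtil logs as "Set watcher on znode that does not yet exist".
            zk.exists("/hbase/master", true);
            zk.close();
        }
    }

The ZKWatcher/RecoverableZooKeeper entries in the log are HBase's wrapper delivering those same events with retries and its own event-processing threads.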
2023-07-19 18:15:09,337 INFO [Listener at localhost/41015] http.HttpServer(1146): Jetty bound to port 45673 2023-07-19 18:15:09,337 INFO [Listener at localhost/41015] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:09,338 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,339 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10f8b861{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:15:09,339 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,339 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3dad8b91{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:15:09,452 INFO [Listener at localhost/41015] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:15:09,453 INFO [Listener at localhost/41015] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:15:09,454 INFO [Listener at localhost/41015] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:15:09,454 INFO [Listener at localhost/41015] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 18:15:09,455 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,455 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3f1c21cd{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/java.io.tmpdir/jetty-0_0_0_0-45673-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5255985418726012027/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 18:15:09,457 INFO [Listener at localhost/41015] server.AbstractConnector(333): Started ServerConnector@43c372b2{HTTP/1.1, (http/1.1)}{0.0.0.0:45673} 2023-07-19 18:15:09,457 INFO [Listener at localhost/41015] server.Server(415): Started @43165ms 2023-07-19 18:15:09,457 INFO [Listener at localhost/41015] master.HMaster(444): hbase.rootdir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8, hbase.cluster.distributed=false 2023-07-19 18:15:09,471 INFO [Listener at localhost/41015] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:15:09,471 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,471 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,471 
INFO [Listener at localhost/41015] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:15:09,471 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,471 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:15:09,471 INFO [Listener at localhost/41015] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:15:09,473 INFO [Listener at localhost/41015] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33689 2023-07-19 18:15:09,474 INFO [Listener at localhost/41015] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:15:09,474 DEBUG [Listener at localhost/41015] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:15:09,475 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,476 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,477 INFO [Listener at localhost/41015] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33689 connecting to ZooKeeper ensemble=127.0.0.1:50044 2023-07-19 18:15:09,481 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:336890x0, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:15:09,482 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33689-0x1017ecb73ea0001 connected 2023-07-19 18:15:09,482 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:15:09,483 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:09,483 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:15:09,484 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33689 2023-07-19 18:15:09,484 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33689 2023-07-19 18:15:09,484 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33689 2023-07-19 18:15:09,485 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33689 2023-07-19 18:15:09,485 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33689 2023-07-19 18:15:09,487 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:15:09,487 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:15:09,487 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:15:09,488 INFO [Listener at localhost/41015] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:15:09,488 INFO [Listener at localhost/41015] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:15:09,488 INFO [Listener at localhost/41015] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:15:09,488 INFO [Listener at localhost/41015] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 18:15:09,489 INFO [Listener at localhost/41015] http.HttpServer(1146): Jetty bound to port 33863 2023-07-19 18:15:09,489 INFO [Listener at localhost/41015] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:09,491 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,491 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3f5ab527{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:15:09,491 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,491 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2652e034{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:15:09,603 INFO [Listener at localhost/41015] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:15:09,604 INFO [Listener at localhost/41015] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:15:09,604 INFO [Listener at localhost/41015] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:15:09,605 INFO [Listener at localhost/41015] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 18:15:09,605 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,606 INFO 
[Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5b487ac{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/java.io.tmpdir/jetty-0_0_0_0-33863-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5607597440722955522/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:09,607 INFO [Listener at localhost/41015] server.AbstractConnector(333): Started ServerConnector@41dce8ff{HTTP/1.1, (http/1.1)}{0.0.0.0:33863} 2023-07-19 18:15:09,608 INFO [Listener at localhost/41015] server.Server(415): Started @43316ms 2023-07-19 18:15:09,619 INFO [Listener at localhost/41015] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:15:09,619 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,619 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,620 INFO [Listener at localhost/41015] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:15:09,620 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,620 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:15:09,620 INFO [Listener at localhost/41015] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:15:09,621 INFO [Listener at localhost/41015] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41789 2023-07-19 18:15:09,621 INFO [Listener at localhost/41015] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:15:09,629 DEBUG [Listener at localhost/41015] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:15:09,629 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,630 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,631 INFO [Listener at localhost/41015] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41789 connecting to ZooKeeper ensemble=127.0.0.1:50044 2023-07-19 18:15:09,636 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:417890x0, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 
18:15:09,637 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41789-0x1017ecb73ea0002 connected 2023-07-19 18:15:09,637 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:15:09,638 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:09,638 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:15:09,639 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41789 2023-07-19 18:15:09,639 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41789 2023-07-19 18:15:09,639 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41789 2023-07-19 18:15:09,639 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41789 2023-07-19 18:15:09,640 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41789 2023-07-19 18:15:09,641 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:15:09,642 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:15:09,642 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:15:09,642 INFO [Listener at localhost/41015] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:15:09,642 INFO [Listener at localhost/41015] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:15:09,642 INFO [Listener at localhost/41015] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:15:09,642 INFO [Listener at localhost/41015] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
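[editorial sketch] The RpcExecutor and RWQueueRpcExecutor entries above (default.FPBQ.Fifo, priority.RWQ.Fifo with separate read and write handlers, replication and metaPriority queues) are shaped by standard call-queue settings in hbase-site. A rough sketch of those knobs; the property names are standard HBase keys, but the values are chosen only to echo what this log prints and are not taken from the test's actual configuration, and the class name is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcQueueConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Number of RPC handler threads serving the default call queue.
            conf.setInt("hbase.regionserver.handler.count", 3);
            // A read ratio > 0 splits call queues into read and write queues,
            // the kind of split the RWQueueRpcExecutor line reports.
            conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);
            // 0 keeps scans in the read queues (matching scanQueues=0 above).
            conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0f);
            System.out.println("handlers=" + conf.getInt("hbase.regionserver.handler.count", 30));
        }
    }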
2023-07-19 18:15:09,643 INFO [Listener at localhost/41015] http.HttpServer(1146): Jetty bound to port 36633 2023-07-19 18:15:09,643 INFO [Listener at localhost/41015] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:09,644 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,644 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@737b34c2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:15:09,645 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,645 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@743d266c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:15:09,756 INFO [Listener at localhost/41015] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:15:09,757 INFO [Listener at localhost/41015] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:15:09,757 INFO [Listener at localhost/41015] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:15:09,758 INFO [Listener at localhost/41015] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 18:15:09,758 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,759 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3c8603ce{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/java.io.tmpdir/jetty-0_0_0_0-36633-hbase-server-2_4_18-SNAPSHOT_jar-_-any-225686308796166879/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:09,761 INFO [Listener at localhost/41015] server.AbstractConnector(333): Started ServerConnector@6a7c0e78{HTTP/1.1, (http/1.1)}{0.0.0.0:36633} 2023-07-19 18:15:09,762 INFO [Listener at localhost/41015] server.Server(415): Started @43470ms 2023-07-19 18:15:09,773 INFO [Listener at localhost/41015] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:15:09,773 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,773 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,773 INFO [Listener at localhost/41015] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:15:09,773 INFO 
[Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:09,773 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:15:09,774 INFO [Listener at localhost/41015] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:15:09,774 INFO [Listener at localhost/41015] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36501 2023-07-19 18:15:09,775 INFO [Listener at localhost/41015] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:15:09,776 DEBUG [Listener at localhost/41015] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:15:09,776 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,777 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,778 INFO [Listener at localhost/41015] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36501 connecting to ZooKeeper ensemble=127.0.0.1:50044 2023-07-19 18:15:09,781 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:365010x0, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:15:09,782 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): regionserver:365010x0, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 18:15:09,783 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36501-0x1017ecb73ea0003 connected 2023-07-19 18:15:09,783 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:09,784 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:15:09,784 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36501 2023-07-19 18:15:09,784 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36501 2023-07-19 18:15:09,784 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36501 2023-07-19 18:15:09,785 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36501 2023-07-19 18:15:09,785 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=36501 2023-07-19 18:15:09,786 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:15:09,787 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:15:09,787 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:15:09,787 INFO [Listener at localhost/41015] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:15:09,787 INFO [Listener at localhost/41015] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:15:09,787 INFO [Listener at localhost/41015] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:15:09,787 INFO [Listener at localhost/41015] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 18:15:09,788 INFO [Listener at localhost/41015] http.HttpServer(1146): Jetty bound to port 35187 2023-07-19 18:15:09,788 INFO [Listener at localhost/41015] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:09,789 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,790 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@df82292{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:15:09,790 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,790 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2648f553{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:15:09,904 INFO [Listener at localhost/41015] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:15:09,904 INFO [Listener at localhost/41015] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:15:09,904 INFO [Listener at localhost/41015] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:15:09,905 INFO [Listener at localhost/41015] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 18:15:09,905 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:09,906 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@192c0497{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/java.io.tmpdir/jetty-0_0_0_0-35187-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3760660742590410569/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:09,907 INFO [Listener at localhost/41015] server.AbstractConnector(333): Started ServerConnector@2247058a{HTTP/1.1, (http/1.1)}{0.0.0.0:35187} 2023-07-19 18:15:09,908 INFO [Listener at localhost/41015] server.Server(415): Started @43616ms 2023-07-19 18:15:09,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:09,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@23e4505e{HTTP/1.1, (http/1.1)}{0.0.0.0:44967} 2023-07-19 18:15:09,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43622ms 2023-07-19 18:15:09,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,39305,1689790509306 2023-07-19 18:15:09,915 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 18:15:09,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,39305,1689790509306 2023-07-19 18:15:09,917 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:15:09,917 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:15:09,917 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:15:09,917 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 18:15:09,918 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:09,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 18:15:09,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,39305,1689790509306 from backup master directory 2023-07-19 18:15:09,921 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 18:15:09,923 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,39305,1689790509306 2023-07-19 18:15:09,923 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 18:15:09,923 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 18:15:09,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,39305,1689790509306 2023-07-19 18:15:09,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/hbase.id with ID: 7dd08696-67d7-4cc2-9662-0bc6a1d2f95e 2023-07-19 18:15:09,963 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:09,965 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:09,978 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7995a4c1 to 127.0.0.1:50044 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:09,984 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2aef0394, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:09,985 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:09,985 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-19 18:15:09,985 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:09,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/data/master/store-tmp 2023-07-19 18:15:09,995 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:09,995 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 18:15:09,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:09,995 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:09,995 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 18:15:09,995 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 18:15:09,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
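[editorial sketch] The 'master:store' region above is created from the table descriptor printed in the log: a single column family 'proc' with a ROW bloom filter, one version, 64 KB blocks, no compression, not in-memory. Rebuilding an equivalent descriptor with the public 2.x builder API looks roughly like this; it is a reading aid for the log line, not how MasterRegion itself constructs the descriptor, and the class name is illustrative:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MasterStoreDescriptorSketch {
        public static void main(String[] args) {
            // Mirrors {NAME => 'proc', BLOOMFILTER => 'ROW', VERSIONS => '1',
            //          BLOCKSIZE => '65536', IN_MEMORY => 'false', ...} from the log.
            ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                    .setBloomFilterType(BloomType.ROW)
                    .setMaxVersions(1)
                    .setBlocksize(65536)
                    .setInMemory(false)
                    .build();
            TableDescriptor td = TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("master", "store"))
                    .setColumnFamily(proc)
                    .build();
            System.out.println(td);
        }
    }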
2023-07-19 18:15:09,995 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 18:15:09,996 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/WALs/jenkins-hbase4.apache.org,39305,1689790509306 2023-07-19 18:15:09,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39305%2C1689790509306, suffix=, logDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/WALs/jenkins-hbase4.apache.org,39305,1689790509306, archiveDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/oldWALs, maxLogs=10 2023-07-19 18:15:10,014 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK] 2023-07-19 18:15:10,015 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK] 2023-07-19 18:15:10,014 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK] 2023-07-19 18:15:10,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/WALs/jenkins-hbase4.apache.org,39305,1689790509306/jenkins-hbase4.apache.org%2C39305%2C1689790509306.1689790509999 2023-07-19 18:15:10,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK], DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK], DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK]] 2023-07-19 18:15:10,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:10,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:10,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:10,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:10,019 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:10,021 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-19 18:15:10,021 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-19 18:15:10,022 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:10,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:10,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:10,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 18:15:10,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:10,027 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11597056480, jitterRate=0.08006004989147186}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:10,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 18:15:10,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-19 18:15:10,029 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-19 18:15:10,029 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-19 18:15:10,029 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-19 18:15:10,029 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-19 18:15:10,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-19 18:15:10,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-19 18:15:10,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-19 18:15:10,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-19 18:15:10,033 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-19 18:15:10,033 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-19 18:15:10,033 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-19 18:15:10,035 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:10,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-19 18:15:10,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-19 18:15:10,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-19 18:15:10,040 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:10,040 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:10,040 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-19 18:15:10,040 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:10,040 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:10,040 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,39305,1689790509306, sessionid=0x1017ecb73ea0000, setting cluster-up flag (Was=false) 2023-07-19 18:15:10,045 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:10,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-19 18:15:10,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39305,1689790509306 2023-07-19 18:15:10,054 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:10,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-19 18:15:10,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39305,1689790509306 2023-07-19 18:15:10,061 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.hbase-snapshot/.tmp 2023-07-19 18:15:10,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-19 18:15:10,062 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-19 18:15:10,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-19 18:15:10,063 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 18:15:10,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
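[editorial sketch] The RSGroupAdminEndpoint coprocessor loaded above is the pre-3.0 rsgroup wiring exercised by this test. On branch-2 it is normally enabled through configuration roughly as below; the property names are the documented ones for the hbase-rsgroup module, the balancer line is the usual companion setting and is not itself visible in this log, and the class name is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RsGroupEnableSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Load the rsgroup admin endpoint on the master, as this test cluster does.
            conf.set("hbase.coprocessor.master.classes",
                    "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
            // Usual companion setting for rsgroups on branch-2 (not shown in this log);
            // it wraps an internal StochasticLoadBalancer, consistent with the balancer lines below.
            conf.set("hbase.master.loadbalancer.class",
                    "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
            System.out.println(conf.get("hbase.coprocessor.master.classes"));
        }
    }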
2023-07-19 18:15:10,064 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-19 18:15:10,078 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 18:15:10,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 18:15:10,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 18:15:10,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 18:15:10,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:15:10,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:15:10,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:15:10,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 18:15:10,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-19 18:15:10,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:15:10,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,081 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689790540081 2023-07-19 18:15:10,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-19 18:15:10,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-19 18:15:10,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-19 18:15:10,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-19 18:15:10,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-19 18:15:10,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-19 18:15:10,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,083 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 18:15:10,083 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-19 18:15:10,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-19 18:15:10,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-19 18:15:10,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-19 18:15:10,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-19 18:15:10,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-19 18:15:10,084 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689790510084,5,FailOnTimeoutGroup] 2023-07-19 18:15:10,084 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689790510084,5,FailOnTimeoutGroup] 2023-07-19 18:15:10,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-19 18:15:10,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,085 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:10,109 INFO [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(951): ClusterId : 7dd08696-67d7-4cc2-9662-0bc6a1d2f95e 2023-07-19 18:15:10,112 DEBUG [RS:0;jenkins-hbase4:33689] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:15:10,115 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(951): ClusterId : 7dd08696-67d7-4cc2-9662-0bc6a1d2f95e 2023-07-19 18:15:10,115 INFO [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(951): ClusterId : 7dd08696-67d7-4cc2-9662-0bc6a1d2f95e 2023-07-19 18:15:10,116 DEBUG [RS:1;jenkins-hbase4:41789] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:15:10,117 DEBUG [RS:2;jenkins-hbase4:36501] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:15:10,117 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:10,117 DEBUG [RS:0;jenkins-hbase4:33689] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:15:10,118 DEBUG [RS:0;jenkins-hbase4:33689] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:15:10,118 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:10,118 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8 2023-07-19 18:15:10,121 DEBUG [RS:1;jenkins-hbase4:41789] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:15:10,121 DEBUG [RS:1;jenkins-hbase4:41789] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:15:10,121 DEBUG [RS:0;jenkins-hbase4:33689] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 18:15:10,122 DEBUG [RS:2;jenkins-hbase4:36501] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:15:10,122 DEBUG [RS:2;jenkins-hbase4:36501] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:15:10,122 DEBUG [RS:0;jenkins-hbase4:33689] zookeeper.ReadOnlyZKClient(139): Connect 0x341978eb to 127.0.0.1:50044 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:10,124 DEBUG [RS:1;jenkins-hbase4:41789] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 18:15:10,124 DEBUG [RS:2;jenkins-hbase4:36501] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 18:15:10,127 DEBUG [RS:1;jenkins-hbase4:41789] zookeeper.ReadOnlyZKClient(139): Connect 0x5e983e41 to 127.0.0.1:50044 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:10,127 DEBUG [RS:2;jenkins-hbase4:36501] zookeeper.ReadOnlyZKClient(139): Connect 0x3d4df155 to 127.0.0.1:50044 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:10,134 DEBUG [RS:0;jenkins-hbase4:33689] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ade9018, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:10,134 DEBUG [RS:0;jenkins-hbase4:33689] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ee1139b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:15:10,136 DEBUG [RS:1;jenkins-hbase4:41789] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@59bc7e5c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:10,136 DEBUG [RS:1;jenkins-hbase4:41789] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f8c87f5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:15:10,138 DEBUG [RS:2;jenkins-hbase4:36501] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@776a40ff, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:10,138 DEBUG [RS:2;jenkins-hbase4:36501] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cba6ef7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:15:10,144 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:10,144 DEBUG [RS:0;jenkins-hbase4:33689] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33689 2023-07-19 18:15:10,144 INFO [RS:0;jenkins-hbase4:33689] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:15:10,144 INFO [RS:0;jenkins-hbase4:33689] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:15:10,144 DEBUG [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-19 18:15:10,145 INFO [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39305,1689790509306 with isa=jenkins-hbase4.apache.org/172.31.14.131:33689, startcode=1689790509470 2023-07-19 18:15:10,145 DEBUG [RS:0;jenkins-hbase4:33689] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:15:10,145 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 18:15:10,147 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/info 2023-07-19 18:15:10,147 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34685, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:15:10,147 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 18:15:10,149 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39305] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,149 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 18:15:10,149 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-19 18:15:10,150 DEBUG [RS:1;jenkins-hbase4:41789] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41789 2023-07-19 18:15:10,150 DEBUG [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8 2023-07-19 18:15:10,150 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:10,150 INFO [RS:1;jenkins-hbase4:41789] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:15:10,150 INFO [RS:1;jenkins-hbase4:41789] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:15:10,150 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 18:15:10,150 DEBUG [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37897 2023-07-19 18:15:10,150 DEBUG [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 18:15:10,150 DEBUG [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45673 2023-07-19 18:15:10,151 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39305,1689790509306 with isa=jenkins-hbase4.apache.org/172.31.14.131:41789, startcode=1689790509619 2023-07-19 18:15:10,151 DEBUG [RS:1;jenkins-hbase4:41789] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:15:10,151 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:15:10,152 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:10,152 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 18:15:10,152 
DEBUG [RS:2;jenkins-hbase4:36501] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:36501 2023-07-19 18:15:10,152 INFO [RS:2;jenkins-hbase4:36501] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:15:10,152 INFO [RS:2;jenkins-hbase4:36501] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:15:10,152 DEBUG [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 18:15:10,152 DEBUG [RS:0;jenkins-hbase4:33689] zookeeper.ZKUtil(162): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,152 WARN [RS:0;jenkins-hbase4:33689] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 18:15:10,152 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42375, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:15:10,152 INFO [RS:0;jenkins-hbase4:33689] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:10,153 INFO [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39305,1689790509306 with isa=jenkins-hbase4.apache.org/172.31.14.131:36501, startcode=1689790509773 2023-07-19 18:15:10,153 DEBUG [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,153 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39305] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,153 DEBUG [RS:2;jenkins-hbase4:36501] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:15:10,153 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 18:15:10,153 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-19 18:15:10,153 DEBUG [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8 2023-07-19 18:15:10,153 DEBUG [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37897 2023-07-19 18:15:10,153 DEBUG [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45673 2023-07-19 18:15:10,154 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43581, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:15:10,154 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39305] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:10,154 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 18:15:10,154 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-19 18:15:10,154 DEBUG [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8 2023-07-19 18:15:10,155 DEBUG [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37897 2023-07-19 18:15:10,155 DEBUG [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45673 2023-07-19 18:15:10,155 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:10,155 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 18:15:10,162 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/table 2023-07-19 18:15:10,162 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 18:15:10,163 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:10,163 DEBUG [RS:1;jenkins-hbase4:41789] zookeeper.ZKUtil(162): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,163 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41789,1689790509619] 2023-07-19 18:15:10,163 WARN [RS:1;jenkins-hbase4:41789] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 18:15:10,163 DEBUG [RS:2;jenkins-hbase4:36501] zookeeper.ZKUtil(162): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:10,163 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36501,1689790509773] 2023-07-19 18:15:10,163 WARN [RS:2;jenkins-hbase4:36501] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 18:15:10,163 INFO [RS:1;jenkins-hbase4:41789] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:10,164 INFO [RS:2;jenkins-hbase4:36501] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:10,164 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33689,1689790509470] 2023-07-19 18:15:10,164 DEBUG [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,164 DEBUG [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:10,164 DEBUG [RS:0;jenkins-hbase4:33689] zookeeper.ZKUtil(162): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,165 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740 2023-07-19 18:15:10,165 DEBUG [RS:0;jenkins-hbase4:33689] zookeeper.ZKUtil(162): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,165 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740 2023-07-19 
18:15:10,166 DEBUG [RS:0;jenkins-hbase4:33689] zookeeper.ZKUtil(162): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:10,172 DEBUG [RS:0;jenkins-hbase4:33689] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:15:10,172 INFO [RS:0;jenkins-hbase4:33689] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:15:10,173 DEBUG [RS:2;jenkins-hbase4:36501] zookeeper.ZKUtil(162): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,173 DEBUG [RS:1;jenkins-hbase4:41789] zookeeper.ZKUtil(162): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,174 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-19 18:15:10,174 INFO [RS:0;jenkins-hbase4:33689] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:15:10,174 DEBUG [RS:2;jenkins-hbase4:36501] zookeeper.ZKUtil(162): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,174 INFO [RS:0;jenkins-hbase4:33689] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:15:10,174 DEBUG [RS:2;jenkins-hbase4:36501] zookeeper.ZKUtil(162): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:10,174 DEBUG [RS:1;jenkins-hbase4:41789] zookeeper.ZKUtil(162): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,174 INFO [RS:0;jenkins-hbase4:33689] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:10,175 DEBUG [RS:2;jenkins-hbase4:36501] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:15:10,175 DEBUG [RS:1;jenkins-hbase4:41789] zookeeper.ZKUtil(162): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:10,175 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 18:15:10,175 INFO [RS:2;jenkins-hbase4:36501] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:15:10,176 DEBUG [RS:1;jenkins-hbase4:41789] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:15:10,176 INFO [RS:1;jenkins-hbase4:41789] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:15:10,178 INFO [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:15:10,182 INFO [RS:1;jenkins-hbase4:41789] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:15:10,182 INFO [RS:2;jenkins-hbase4:36501] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:15:10,183 INFO [RS:1;jenkins-hbase4:41789] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:15:10,183 INFO [RS:1;jenkins-hbase4:41789] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,183 INFO [RS:0;jenkins-hbase4:33689] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,183 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:15:10,183 INFO [RS:2;jenkins-hbase4:36501] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:15:10,184 DEBUG [RS:0;jenkins-hbase4:33689] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,185 INFO [RS:2;jenkins-hbase4:36501] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:10,185 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:10,185 DEBUG [RS:0;jenkins-hbase4:33689] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,185 DEBUG [RS:0;jenkins-hbase4:33689] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,185 DEBUG [RS:0;jenkins-hbase4:33689] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,186 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10959090560, jitterRate=0.020644843578338623}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 18:15:10,186 DEBUG [RS:0;jenkins-hbase4:33689] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,186 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 18:15:10,186 DEBUG [RS:0;jenkins-hbase4:33689] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:15:10,186 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 18:15:10,186 DEBUG [RS:0;jenkins-hbase4:33689] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,186 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 18:15:10,186 DEBUG [RS:0;jenkins-hbase4:33689] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,186 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 18:15:10,186 DEBUG [RS:0;jenkins-hbase4:33689] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,186 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 18:15:10,186 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 18:15:10,186 DEBUG [RS:0;jenkins-hbase4:33689] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,190 INFO [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:15:10,191 INFO [RS:1;jenkins-hbase4:41789] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:10,195 DEBUG [RS:1;jenkins-hbase4:41789] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,195 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 18:15:10,195 DEBUG [RS:1;jenkins-hbase4:41789] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,195 INFO [RS:0;jenkins-hbase4:33689] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,195 DEBUG [RS:1;jenkins-hbase4:41789] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,195 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 18:15:10,196 DEBUG [RS:1;jenkins-hbase4:41789] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,195 INFO [RS:0;jenkins-hbase4:33689] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,196 DEBUG [RS:1;jenkins-hbase4:41789] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,196 INFO [RS:2;jenkins-hbase4:36501] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,196 DEBUG [RS:1;jenkins-hbase4:41789] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:15:10,196 INFO [RS:0;jenkins-hbase4:33689] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:10,197 DEBUG [RS:1;jenkins-hbase4:41789] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,197 DEBUG [RS:2;jenkins-hbase4:36501] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,197 DEBUG [RS:1;jenkins-hbase4:41789] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,197 DEBUG [RS:2;jenkins-hbase4:36501] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,197 DEBUG [RS:1;jenkins-hbase4:41789] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,197 DEBUG [RS:2;jenkins-hbase4:36501] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,197 DEBUG [RS:1;jenkins-hbase4:41789] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,197 DEBUG [RS:2;jenkins-hbase4:36501] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,198 DEBUG [RS:2;jenkins-hbase4:36501] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,198 DEBUG [RS:2;jenkins-hbase4:36501] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:15:10,198 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 18:15:10,198 DEBUG [RS:2;jenkins-hbase4:36501] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,198 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-19 18:15:10,198 DEBUG [RS:2;jenkins-hbase4:36501] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,198 DEBUG [RS:2;jenkins-hbase4:36501] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,198 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-19 18:15:10,198 DEBUG [RS:2;jenkins-hbase4:36501] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:10,202 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN 2023-07-19 18:15:10,203 INFO [RS:2;jenkins-hbase4:36501] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,203 INFO [RS:2;jenkins-hbase4:36501] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,203 INFO [RS:2;jenkins-hbase4:36501] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,207 INFO [RS:1;jenkins-hbase4:41789] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,208 INFO [RS:1;jenkins-hbase4:41789] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,208 INFO [RS:1;jenkins-hbase4:41789] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,208 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-19 18:15:10,215 INFO [RS:0;jenkins-hbase4:33689] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:15:10,215 INFO [RS:0;jenkins-hbase4:33689] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33689,1689790509470-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,218 INFO [RS:2;jenkins-hbase4:36501] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:15:10,218 INFO [RS:2;jenkins-hbase4:36501] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36501,1689790509773-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,221 INFO [RS:1;jenkins-hbase4:41789] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:15:10,222 INFO [RS:1;jenkins-hbase4:41789] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41789,1689790509619-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:10,230 INFO [RS:2;jenkins-hbase4:36501] regionserver.Replication(203): jenkins-hbase4.apache.org,36501,1689790509773 started 2023-07-19 18:15:10,230 INFO [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36501,1689790509773, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36501, sessionid=0x1017ecb73ea0003 2023-07-19 18:15:10,230 DEBUG [RS:2;jenkins-hbase4:36501] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:15:10,230 DEBUG [RS:2;jenkins-hbase4:36501] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:10,230 DEBUG [RS:2;jenkins-hbase4:36501] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36501,1689790509773' 2023-07-19 18:15:10,230 DEBUG [RS:2;jenkins-hbase4:36501] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:15:10,231 DEBUG [RS:2;jenkins-hbase4:36501] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:15:10,231 DEBUG [RS:2;jenkins-hbase4:36501] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:15:10,231 DEBUG [RS:2;jenkins-hbase4:36501] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:15:10,231 DEBUG [RS:2;jenkins-hbase4:36501] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:10,231 DEBUG [RS:2;jenkins-hbase4:36501] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36501,1689790509773' 2023-07-19 18:15:10,231 DEBUG [RS:2;jenkins-hbase4:36501] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:15:10,231 DEBUG [RS:2;jenkins-hbase4:36501] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:15:10,232 DEBUG [RS:2;jenkins-hbase4:36501] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:15:10,232 INFO [RS:2;jenkins-hbase4:36501] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 18:15:10,232 INFO [RS:2;jenkins-hbase4:36501] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-19 18:15:10,232 INFO [RS:0;jenkins-hbase4:33689] regionserver.Replication(203): jenkins-hbase4.apache.org,33689,1689790509470 started 2023-07-19 18:15:10,232 INFO [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33689,1689790509470, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33689, sessionid=0x1017ecb73ea0001 2023-07-19 18:15:10,232 DEBUG [RS:0;jenkins-hbase4:33689] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:15:10,232 DEBUG [RS:0;jenkins-hbase4:33689] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,232 DEBUG [RS:0;jenkins-hbase4:33689] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33689,1689790509470' 2023-07-19 18:15:10,232 DEBUG [RS:0;jenkins-hbase4:33689] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:15:10,233 DEBUG [RS:0;jenkins-hbase4:33689] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:15:10,233 DEBUG [RS:0;jenkins-hbase4:33689] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:15:10,233 DEBUG [RS:0;jenkins-hbase4:33689] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:15:10,233 DEBUG [RS:0;jenkins-hbase4:33689] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,233 DEBUG [RS:0;jenkins-hbase4:33689] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33689,1689790509470' 2023-07-19 18:15:10,233 DEBUG [RS:0;jenkins-hbase4:33689] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:15:10,234 DEBUG [RS:0;jenkins-hbase4:33689] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:15:10,234 DEBUG [RS:0;jenkins-hbase4:33689] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:15:10,234 INFO [RS:0;jenkins-hbase4:33689] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 18:15:10,234 INFO [RS:0;jenkins-hbase4:33689] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-19 18:15:10,235 INFO [RS:1;jenkins-hbase4:41789] regionserver.Replication(203): jenkins-hbase4.apache.org,41789,1689790509619 started 2023-07-19 18:15:10,235 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41789,1689790509619, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41789, sessionid=0x1017ecb73ea0002 2023-07-19 18:15:10,235 DEBUG [RS:1;jenkins-hbase4:41789] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:15:10,235 DEBUG [RS:1;jenkins-hbase4:41789] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,235 DEBUG [RS:1;jenkins-hbase4:41789] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41789,1689790509619' 2023-07-19 18:15:10,236 DEBUG [RS:1;jenkins-hbase4:41789] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:15:10,236 DEBUG [RS:1;jenkins-hbase4:41789] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:15:10,236 DEBUG [RS:1;jenkins-hbase4:41789] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:15:10,236 DEBUG [RS:1;jenkins-hbase4:41789] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:15:10,236 DEBUG [RS:1;jenkins-hbase4:41789] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,236 DEBUG [RS:1;jenkins-hbase4:41789] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41789,1689790509619' 2023-07-19 18:15:10,236 DEBUG [RS:1;jenkins-hbase4:41789] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:15:10,237 DEBUG [RS:1;jenkins-hbase4:41789] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:15:10,237 DEBUG [RS:1;jenkins-hbase4:41789] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:15:10,237 INFO [RS:1;jenkins-hbase4:41789] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 18:15:10,237 INFO [RS:1;jenkins-hbase4:41789] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-19 18:15:10,334 INFO [RS:2;jenkins-hbase4:36501] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36501%2C1689790509773, suffix=, logDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,36501,1689790509773, archiveDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/oldWALs, maxLogs=32 2023-07-19 18:15:10,335 INFO [RS:0;jenkins-hbase4:33689] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33689%2C1689790509470, suffix=, logDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,33689,1689790509470, archiveDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/oldWALs, maxLogs=32 2023-07-19 18:15:10,338 INFO [RS:1;jenkins-hbase4:41789] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41789%2C1689790509619, suffix=, logDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,41789,1689790509619, archiveDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/oldWALs, maxLogs=32 2023-07-19 18:15:10,359 DEBUG [jenkins-hbase4:39305] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-19 18:15:10,360 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK] 2023-07-19 18:15:10,360 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK] 2023-07-19 18:15:10,360 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK] 2023-07-19 18:15:10,363 DEBUG [jenkins-hbase4:39305] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:15:10,363 DEBUG [jenkins-hbase4:39305] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:15:10,363 DEBUG [jenkins-hbase4:39305] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:15:10,363 DEBUG [jenkins-hbase4:39305] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:15:10,363 DEBUG [jenkins-hbase4:39305] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:15:10,367 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK] 2023-07-19 18:15:10,370 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK] 2023-07-19 18:15:10,370 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41789,1689790509619, state=OPENING 2023-07-19 18:15:10,371 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK] 2023-07-19 18:15:10,371 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK] 2023-07-19 18:15:10,371 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK] 2023-07-19 18:15:10,371 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK] 2023-07-19 18:15:10,373 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-19 18:15:10,381 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:10,382 WARN [ReadOnlyZKClient-127.0.0.1:50044@0x7995a4c1] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-19 18:15:10,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41789,1689790509619}] 2023-07-19 18:15:10,382 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 18:15:10,382 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39305,1689790509306] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:15:10,388 INFO [RS:2;jenkins-hbase4:36501] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,36501,1689790509773/jenkins-hbase4.apache.org%2C36501%2C1689790509773.1689790510334 2023-07-19 18:15:10,389 INFO [RS:0;jenkins-hbase4:33689] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,33689,1689790509470/jenkins-hbase4.apache.org%2C33689%2C1689790509470.1689790510336 2023-07-19 18:15:10,388 INFO [RS:1;jenkins-hbase4:41789] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,41789,1689790509619/jenkins-hbase4.apache.org%2C41789%2C1689790509619.1689790510339 2023-07-19 18:15:10,390 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 
172.31.14.131:55516, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:15:10,398 DEBUG [RS:1;jenkins-hbase4:41789] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK], DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK], DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK]] 2023-07-19 18:15:10,399 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41789] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:55516 deadline: 1689790570390, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,399 DEBUG [RS:2;jenkins-hbase4:36501] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK], DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK], DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK]] 2023-07-19 18:15:10,399 DEBUG [RS:0;jenkins-hbase4:33689] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK], DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK], DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK]] 2023-07-19 18:15:10,542 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,544 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:15:10,546 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55532, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:15:10,550 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 18:15:10,550 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:10,552 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41789%2C1689790509619.meta, suffix=.meta, logDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,41789,1689790509619, archiveDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/oldWALs, maxLogs=32 2023-07-19 18:15:10,572 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK] 2023-07-19 18:15:10,573 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK] 2023-07-19 18:15:10,584 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK] 2023-07-19 18:15:10,603 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,41789,1689790509619/jenkins-hbase4.apache.org%2C41789%2C1689790509619.meta.1689790510552.meta 2023-07-19 18:15:10,603 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK], DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK], DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK]] 2023-07-19 18:15:10,603 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:10,603 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 18:15:10,604 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 18:15:10,604 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
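The MultiRowMutationEndpoint above is loaded from the table descriptor (HTD) of hbase:meta rather than from a jar, which is why the log shows "path null". A hedged sketch of attaching the same coprocessor to an ordinary table through the public descriptor builders; the table and family names are illustrative, not what the meta region actually does internally:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CoprocessorDescriptorSketch {
    public static void main(String[] args) throws Exception {
        // Registering the endpoint by class name means it is later loaded with "path null",
        // the same way the meta region logs it above.
        TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("demo_table"))             // illustrative name
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf")) // illustrative family
                .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                .build();
        System.out.println(td);
    }
}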
2023-07-19 18:15:10,604 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 18:15:10,604 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:10,604 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 18:15:10,604 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 18:15:10,605 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 18:15:10,606 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-19 18:15:10,607 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/info 2023-07-19 18:15:10,607 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/info 2023-07-19 18:15:10,607 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 18:15:10,610 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:10,610 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 18:15:10,614 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:15:10,614 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/rep_barrier 2023-07-19 18:15:10,615 INFO [StoreOpener-1588230740-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 18:15:10,617 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:10,617 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 18:15:10,619 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/table 2023-07-19 18:15:10,619 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/table 2023-07-19 18:15:10,620 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 18:15:10,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:10,624 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740 2023-07-19 18:15:10,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740 2023-07-19 18:15:10,636 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
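The CompactionConfiguration lines above for the info, rep_barrier and table families all print the same defaults (min 3 / max 10 files, ratio 1.2, off-peak ratio 5.0, 7-day major period with 0.5 jitter). A sketch, assuming the usual hbase.hstore.compaction.* and hbase.hregion.majorcompaction keys, of where those numbers would be tuned in the configuration handed to the cluster; the values shown simply restate the logged defaults:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuningSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);       // minFilesToCompact in the log
        conf.setInt("hbase.hstore.compaction.max", 10);      // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        conf.setLong("hbase.hregion.majorcompaction", 7L * 24 * 60 * 60 * 1000); // major period
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);             // major jitter
        System.out.println("compaction ratio = "
                + conf.getFloat("hbase.hstore.compaction.ratio", 0f));
    }
}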
2023-07-19 18:15:10,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 18:15:10,642 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9499581760, jitterRate=-0.11528250575065613}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 18:15:10,642 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 18:15:10,643 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689790510542 2023-07-19 18:15:10,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 18:15:10,647 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 18:15:10,647 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41789,1689790509619, state=OPEN 2023-07-19 18:15:10,649 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 18:15:10,649 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 18:15:10,650 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-19 18:15:10,650 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41789,1689790509619 in 267 msec 2023-07-19 18:15:10,652 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-19 18:15:10,652 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 452 msec 2023-07-19 18:15:10,654 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 590 msec 2023-07-19 18:15:10,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689790510654, completionTime=-1 2023-07-19 18:15:10,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-19 18:15:10,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
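Once the meta location flips to OPEN above, clients that earlier hit NotServingRegionException while it was OPENING (the callId: 0 Get at 18:15:10,399) can resolve hbase:meta through the normal client path. A minimal sketch of that lookup with the public client API; the Configuration would carry the test's quorum (127.0.0.1:50044 in this run), and the client retries internally until the region is online:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // quorum/port come from the test config
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
            // Force a fresh lookup of the single hbase:meta region.
            HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
            System.out.println("hbase:meta is on " + loc.getServerName());
        }
    }
}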
2023-07-19 18:15:10,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-19 18:15:10,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689790570659 2023-07-19 18:15:10,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689790630659 2023-07-19 18:15:10,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-19 18:15:10,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39305,1689790509306-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39305,1689790509306-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39305,1689790509306-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:39305, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:10,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-19 18:15:10,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:10,671 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-19 18:15:10,673 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:15:10,674 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-19 18:15:10,674 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:15:10,677 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/hbase/namespace/2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:10,678 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/hbase/namespace/2968131a1076392f3ff887a5705b7862 empty. 2023-07-19 18:15:10,678 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/hbase/namespace/2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:10,678 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-19 18:15:10,695 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:10,697 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2968131a1076392f3ff887a5705b7862, NAME => 'hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp 2023-07-19 18:15:10,701 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39305,1689790509306] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-19 18:15:10,703 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39305,1689790509306] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-19 18:15:10,705 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:15:10,706 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:15:10,707 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:10,708 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e empty. 2023-07-19 18:15:10,709 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:10,709 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-19 18:15:10,723 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:10,723 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 2968131a1076392f3ff887a5705b7862, disabling compactions & flushes 2023-07-19 18:15:10,723 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 2023-07-19 18:15:10,723 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 2023-07-19 18:15:10,723 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. after waiting 0 ms 2023-07-19 18:15:10,723 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 2023-07-19 18:15:10,723 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 
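The 'hbase:namespace' schema printed at 18:15:10,670 (and the 'hbase:rsgroup' one created just after it) is built internally by the master, but an equivalent descriptor can be expressed with the public builders. A hedged sketch that mirrors the logged attributes of the info family; it is illustrative, not the code the CreateTableProcedure actually runs:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceSchemaSketch {
    public static void main(String[] args) {
        // Mirrors: NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true',
        // VERSIONS => '10', TTL => 'FOREVER', BLOCKSIZE => '8192'
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(true)
                .setMaxVersions(10)
                .setTimeToLive(HConstants.FOREVER)
                .setBlocksize(8192)
                .build();
        TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("hbase", "namespace"))
                .setColumnFamily(info)
                .build();
        System.out.println(td);
    }
}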
2023-07-19 18:15:10,723 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 2968131a1076392f3ff887a5705b7862: 2023-07-19 18:15:10,725 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:15:10,726 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790510726"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790510726"}]},"ts":"1689790510726"} 2023-07-19 18:15:10,729 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 18:15:10,729 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:15:10,730 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790510730"}]},"ts":"1689790510730"} 2023-07-19 18:15:10,730 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:10,731 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-19 18:15:10,731 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => a8ca1f3882e62958c7ca91ce3cbb2d8e, NAME => 'hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp 2023-07-19 18:15:10,735 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:15:10,735 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:15:10,735 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:15:10,735 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:15:10,735 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:15:10,735 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2968131a1076392f3ff887a5705b7862, ASSIGN}] 2023-07-19 18:15:10,736 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2968131a1076392f3ff887a5705b7862, ASSIGN 2023-07-19 18:15:10,737 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2968131a1076392f3ff887a5705b7862, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41789,1689790509619; forceNewPlan=false, retain=false 2023-07-19 18:15:10,741 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:10,741 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing a8ca1f3882e62958c7ca91ce3cbb2d8e, disabling compactions & flushes 2023-07-19 18:15:10,741 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 2023-07-19 18:15:10,741 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 2023-07-19 18:15:10,741 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. after waiting 0 ms 2023-07-19 18:15:10,741 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 2023-07-19 18:15:10,741 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 2023-07-19 18:15:10,741 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for a8ca1f3882e62958c7ca91ce3cbb2d8e: 2023-07-19 18:15:10,743 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:15:10,744 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790510744"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790510744"}]},"ts":"1689790510744"} 2023-07-19 18:15:10,745 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-19 18:15:10,745 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:15:10,746 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790510746"}]},"ts":"1689790510746"} 2023-07-19 18:15:10,747 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-19 18:15:10,750 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:15:10,751 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:15:10,751 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:15:10,751 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:15:10,751 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:15:10,751 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a8ca1f3882e62958c7ca91ce3cbb2d8e, ASSIGN}] 2023-07-19 18:15:10,753 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a8ca1f3882e62958c7ca91ce3cbb2d8e, ASSIGN 2023-07-19 18:15:10,755 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a8ca1f3882e62958c7ca91ce3cbb2d8e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33689,1689790509470; forceNewPlan=false, retain=false 2023-07-19 18:15:10,755 INFO [jenkins-hbase4:39305] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-19 18:15:10,757 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=2968131a1076392f3ff887a5705b7862, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,757 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790510757"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790510757"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790510757"}]},"ts":"1689790510757"} 2023-07-19 18:15:10,757 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=a8ca1f3882e62958c7ca91ce3cbb2d8e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,758 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790510757"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790510757"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790510757"}]},"ts":"1689790510757"} 2023-07-19 18:15:10,759 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 2968131a1076392f3ff887a5705b7862, server=jenkins-hbase4.apache.org,41789,1689790509619}] 2023-07-19 18:15:10,761 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure a8ca1f3882e62958c7ca91ce3cbb2d8e, server=jenkins-hbase4.apache.org,33689,1689790509470}] 2023-07-19 18:15:10,913 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,914 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 18:15:10,915 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 
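The RegionStateStore entries above are ordinary Puts into the info family of hbase:meta (qualifiers regioninfo, sn and state while the regions are OPENING, then server, serverstartcode and seqnumDuringOpen once they open). A sketch of reading those columns back with a plain client scan, roughly what tools and tests do when they inspect assignment state; the qualifier handling here is illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaStateScanSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan().addFamily(Bytes.toBytes("info")))) {
            for (Result r : scanner) {
                byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
                byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
                System.out.println(Bytes.toString(r.getRow())
                        + " state=" + Bytes.toString(state)
                        + " server=" + Bytes.toString(server));
            }
        }
    }
}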
2023-07-19 18:15:10,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2968131a1076392f3ff887a5705b7862, NAME => 'hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:10,915 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34868, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 18:15:10,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:10,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:10,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:10,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:10,917 INFO [StoreOpener-2968131a1076392f3ff887a5705b7862-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:10,919 DEBUG [StoreOpener-2968131a1076392f3ff887a5705b7862-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/namespace/2968131a1076392f3ff887a5705b7862/info 2023-07-19 18:15:10,919 DEBUG [StoreOpener-2968131a1076392f3ff887a5705b7862-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/namespace/2968131a1076392f3ff887a5705b7862/info 2023-07-19 18:15:10,919 INFO [StoreOpener-2968131a1076392f3ff887a5705b7862-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2968131a1076392f3ff887a5705b7862 columnFamilyName info 2023-07-19 18:15:10,919 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 
2023-07-19 18:15:10,919 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a8ca1f3882e62958c7ca91ce3cbb2d8e, NAME => 'hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:10,919 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 18:15:10,919 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. service=MultiRowMutationService 2023-07-19 18:15:10,919 INFO [StoreOpener-2968131a1076392f3ff887a5705b7862-1] regionserver.HStore(310): Store=2968131a1076392f3ff887a5705b7862/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:10,920 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-19 18:15:10,920 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:10,920 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:10,920 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:10,920 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:10,920 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/namespace/2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:10,921 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/namespace/2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:10,922 INFO [StoreOpener-a8ca1f3882e62958c7ca91ce3cbb2d8e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:10,923 DEBUG [StoreOpener-a8ca1f3882e62958c7ca91ce3cbb2d8e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e/m 2023-07-19 18:15:10,923 DEBUG [StoreOpener-a8ca1f3882e62958c7ca91ce3cbb2d8e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e/m 2023-07-19 18:15:10,924 INFO [StoreOpener-a8ca1f3882e62958c7ca91ce3cbb2d8e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a8ca1f3882e62958c7ca91ce3cbb2d8e columnFamilyName m 2023-07-19 18:15:10,924 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:10,924 INFO [StoreOpener-a8ca1f3882e62958c7ca91ce3cbb2d8e-1] regionserver.HStore(310): Store=a8ca1f3882e62958c7ca91ce3cbb2d8e/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:10,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:10,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:10,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/namespace/2968131a1076392f3ff887a5705b7862/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:10,926 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2968131a1076392f3ff887a5705b7862; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10034310880, jitterRate=-0.06548197567462921}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:10,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2968131a1076392f3ff887a5705b7862: 2023-07-19 18:15:10,927 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862., pid=8, masterSystemTime=1689790510910 2023-07-19 18:15:10,928 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:10,930 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 
2023-07-19 18:15:10,930 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 2023-07-19 18:15:10,930 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=2968131a1076392f3ff887a5705b7862, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:10,930 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689790510930"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790510930"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790510930"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790510930"}]},"ts":"1689790510930"} 2023-07-19 18:15:10,931 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:10,931 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a8ca1f3882e62958c7ca91ce3cbb2d8e; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@d409af9, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:10,931 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a8ca1f3882e62958c7ca91ce3cbb2d8e: 2023-07-19 18:15:10,932 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e., pid=9, masterSystemTime=1689790510913 2023-07-19 18:15:10,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-19 18:15:10,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 2968131a1076392f3ff887a5705b7862, server=jenkins-hbase4.apache.org,41789,1689790509619 in 172 msec 2023-07-19 18:15:10,934 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 2023-07-19 18:15:10,935 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 
2023-07-19 18:15:10,935 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=a8ca1f3882e62958c7ca91ce3cbb2d8e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:10,935 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689790510935"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790510935"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790510935"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790510935"}]},"ts":"1689790510935"} 2023-07-19 18:15:10,938 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-19 18:15:10,938 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-19 18:15:10,938 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure a8ca1f3882e62958c7ca91ce3cbb2d8e, server=jenkins-hbase4.apache.org,33689,1689790509470 in 176 msec 2023-07-19 18:15:10,938 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2968131a1076392f3ff887a5705b7862, ASSIGN in 199 msec 2023-07-19 18:15:10,939 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:15:10,939 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790510939"}]},"ts":"1689790510939"} 2023-07-19 18:15:10,939 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-19 18:15:10,939 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a8ca1f3882e62958c7ca91ce3cbb2d8e, ASSIGN in 187 msec 2023-07-19 18:15:10,940 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:15:10,940 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-19 18:15:10,940 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790510940"}]},"ts":"1689790510940"} 2023-07-19 18:15:10,941 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-19 18:15:10,943 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:15:10,944 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:15:10,944 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 273 msec 2023-07-19 18:15:10,945 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 243 msec 2023-07-19 18:15:10,972 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-19 18:15:10,973 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:15:10,973 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:10,978 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-19 18:15:10,986 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:15:10,990 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-19 18:15:11,000 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-19 18:15:11,005 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39305,1689790509306] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:15:11,007 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34878, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:15:11,009 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:15:11,009 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-19 18:15:11,009 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
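With hbase:rsgroup online and the group manager refreshing in Online mode above, the rsgroup admin endpoint becomes usable. A rough sketch of listing groups from a client; the RSGroupAdminClient constructor and listRSGroups() method are assumed from the hbase-rsgroup module on this branch and should be checked against the actual sources:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupListSketch {
    public static void main(String[] args) throws Exception {
        // RSGroupAdminClient(Connection) and listRSGroups() are assumed here; verify against
        // the branch-2.4 hbase-rsgroup sources before relying on this sketch.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
            RSGroupAdminClient groups = new RSGroupAdminClient(conn);
            for (RSGroupInfo g : groups.listRSGroups()) {
                System.out.println(g.getName() + " servers=" + g.getServers());
            }
        }
    }
}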
2023-07-19 18:15:11,012 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-07-19 18:15:11,014 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:11,014 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:11,016 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 18:15:11,017 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-19 18:15:11,024 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-19 18:15:11,027 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-19 18:15:11,027 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.103sec 2023-07-19 18:15:11,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-19 18:15:11,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-19 18:15:11,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-19 18:15:11,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39305,1689790509306-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-19 18:15:11,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39305,1689790509306-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
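The entry above where HMaster logs "Master has completed initialization 1.103sec" is the point the test harness waits for before touching the cluster. A minimal sketch of that wait, assuming the HBaseTestingUtility that started this minicluster and the public HMaster.isInitialized() flag; the wrapper class and method name are illustrative only, not taken from this test:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    // Illustrative helper (not from this test): poll the active master of an
    // already-running minicluster until it reports initialization complete,
    // which corresponds to the "Master has completed initialization" entry above.
    final class MasterInitWait {
      static void awaitMasterInitialized(HBaseTestingUtility util) throws Exception {
        util.waitFor(60_000, () -> util.getMiniHBaseCluster().getMaster().isInitialized());
      }
    }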
2023-07-19 18:15:11,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-19 18:15:11,116 DEBUG [Listener at localhost/41015] zookeeper.ReadOnlyZKClient(139): Connect 0x590c5127 to 127.0.0.1:50044 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:11,121 DEBUG [Listener at localhost/41015] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e8df158, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:11,123 DEBUG [hconnection-0x77fc54a2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:15:11,125 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55540, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:15:11,127 INFO [Listener at localhost/41015] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,39305,1689790509306 2023-07-19 18:15:11,127 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:11,130 DEBUG [Listener at localhost/41015] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-19 18:15:11,132 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39860, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-19 18:15:11,136 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-19 18:15:11,136 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:11,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-19 18:15:11,137 DEBUG [Listener at localhost/41015] zookeeper.ReadOnlyZKClient(139): Connect 0x58d21df0 to 127.0.0.1:50044 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:11,142 DEBUG [Listener at localhost/41015] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d9e1b0c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:11,142 INFO [Listener at localhost/41015] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:50044 2023-07-19 18:15:11,146 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:15:11,147 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017ecb73ea000a connected 2023-07-19 
18:15:11,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:11,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:11,153 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-19 18:15:11,164 INFO [Listener at localhost/41015] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 18:15:11,164 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:11,164 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:11,164 INFO [Listener at localhost/41015] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 18:15:11,164 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 18:15:11,164 INFO [Listener at localhost/41015] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 18:15:11,165 INFO [Listener at localhost/41015] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 18:15:11,165 INFO [Listener at localhost/41015] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38273 2023-07-19 18:15:11,166 INFO [Listener at localhost/41015] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 18:15:11,167 DEBUG [Listener at localhost/41015] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 18:15:11,167 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:11,168 INFO [Listener at localhost/41015] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 18:15:11,169 INFO [Listener at localhost/41015] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38273 connecting to ZooKeeper ensemble=127.0.0.1:50044 2023-07-19 18:15:11,173 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:382730x0, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 18:15:11,175 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(162): regionserver:382730x0, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 18:15:11,175 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
regionserver:38273-0x1017ecb73ea000b connected 2023-07-19 18:15:11,176 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(162): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-19 18:15:11,177 DEBUG [Listener at localhost/41015] zookeeper.ZKUtil(164): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 18:15:11,177 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38273 2023-07-19 18:15:11,178 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38273 2023-07-19 18:15:11,180 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38273 2023-07-19 18:15:11,180 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38273 2023-07-19 18:15:11,181 DEBUG [Listener at localhost/41015] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38273 2023-07-19 18:15:11,182 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 18:15:11,182 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 18:15:11,183 INFO [Listener at localhost/41015] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 18:15:11,183 INFO [Listener at localhost/41015] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 18:15:11,183 INFO [Listener at localhost/41015] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 18:15:11,183 INFO [Listener at localhost/41015] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 18:15:11,183 INFO [Listener at localhost/41015] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-19 18:15:11,184 INFO [Listener at localhost/41015] http.HttpServer(1146): Jetty bound to port 43257 2023-07-19 18:15:11,184 INFO [Listener at localhost/41015] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 18:15:11,185 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:11,185 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@29f1e83{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir/,AVAILABLE} 2023-07-19 18:15:11,185 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:11,186 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@74c9cb11{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 18:15:11,307 INFO [Listener at localhost/41015] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 18:15:11,307 INFO [Listener at localhost/41015] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 18:15:11,307 INFO [Listener at localhost/41015] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 18:15:11,308 INFO [Listener at localhost/41015] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 18:15:11,308 INFO [Listener at localhost/41015] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 18:15:11,309 INFO [Listener at localhost/41015] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@86fd3aa{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/java.io.tmpdir/jetty-0_0_0_0-43257-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3974387937416260497/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:11,311 INFO [Listener at localhost/41015] server.AbstractConnector(333): Started ServerConnector@50f9ce59{HTTP/1.1, (http/1.1)}{0.0.0.0:43257} 2023-07-19 18:15:11,311 INFO [Listener at localhost/41015] server.Server(415): Started @45019ms 2023-07-19 18:15:11,313 INFO [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(951): ClusterId : 7dd08696-67d7-4cc2-9662-0bc6a1d2f95e 2023-07-19 18:15:11,314 DEBUG [RS:3;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 18:15:11,317 DEBUG [RS:3;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 18:15:11,317 DEBUG [RS:3;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 18:15:11,318 DEBUG [RS:3;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 18:15:11,321 DEBUG [RS:3;jenkins-hbase4:38273] zookeeper.ReadOnlyZKClient(139): Connect 0x0d3af6c2 to 
127.0.0.1:50044 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 18:15:11,326 DEBUG [RS:3;jenkins-hbase4:38273] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3fd0915, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 18:15:11,326 DEBUG [RS:3;jenkins-hbase4:38273] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c29db2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 18:15:11,334 DEBUG [RS:3;jenkins-hbase4:38273] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:38273 2023-07-19 18:15:11,334 INFO [RS:3;jenkins-hbase4:38273] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 18:15:11,334 INFO [RS:3;jenkins-hbase4:38273] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 18:15:11,335 DEBUG [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 18:15:11,335 INFO [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39305,1689790509306 with isa=jenkins-hbase4.apache.org/172.31.14.131:38273, startcode=1689790511164 2023-07-19 18:15:11,335 DEBUG [RS:3;jenkins-hbase4:38273] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 18:15:11,337 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37135, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 18:15:11,338 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39305] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:11,338 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 18:15:11,338 DEBUG [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8 2023-07-19 18:15:11,338 DEBUG [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37897 2023-07-19 18:15:11,338 DEBUG [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45673 2023-07-19 18:15:11,345 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:11,345 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:11,345 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:11,345 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:11,345 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:11,345 DEBUG [RS:3;jenkins-hbase4:38273] zookeeper.ZKUtil(162): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:11,345 WARN [RS:3;jenkins-hbase4:38273] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
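The run from "Restoring servers: 1" down to this point shows the test utility bringing up a fourth region server (RS:3 on port 38273) and the master registering it, after which the rsgroup listener refreshes the default group. A hedged sketch of doing the same against a running minicluster via MiniHBaseCluster.startRegionServer(); the class wrapping it is illustrative:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    // Illustrative helper (not from this test): start one extra region server on
    // the running minicluster and block until it is online, the same sequence
    // the RS:3 startup entries above walk through.
    final class RestoreServers {
      static void startExtraRegionServer(HBaseTestingUtility util) throws Exception {
        JVMClusterUtil.RegionServerThread rst = util.getMiniHBaseCluster().startRegionServer();
        rst.waitForServerOnline();
      }
    }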
2023-07-19 18:15:11,345 INFO [RS:3;jenkins-hbase4:38273] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 18:15:11,345 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:11,345 DEBUG [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:11,345 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:11,346 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38273,1689790511164] 2023-07-19 18:15:11,346 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 18:15:11,346 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:11,346 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:11,346 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:11,346 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:11,348 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-19 18:15:11,348 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:11,348 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:11,348 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:11,349 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:11,349 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:11,349 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:11,351 DEBUG [RS:3;jenkins-hbase4:38273] zookeeper.ZKUtil(162): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:11,351 DEBUG [RS:3;jenkins-hbase4:38273] zookeeper.ZKUtil(162): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:11,351 DEBUG [RS:3;jenkins-hbase4:38273] zookeeper.ZKUtil(162): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:11,351 DEBUG [RS:3;jenkins-hbase4:38273] zookeeper.ZKUtil(162): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:11,352 DEBUG [RS:3;jenkins-hbase4:38273] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 18:15:11,352 INFO [RS:3;jenkins-hbase4:38273] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 18:15:11,353 INFO [RS:3;jenkins-hbase4:38273] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 18:15:11,354 INFO [RS:3;jenkins-hbase4:38273] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 18:15:11,354 INFO [RS:3;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:11,354 INFO [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 18:15:11,355 INFO [RS:3;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:11,356 DEBUG [RS:3;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:11,356 DEBUG [RS:3;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:11,356 DEBUG [RS:3;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:11,356 DEBUG [RS:3;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:11,356 DEBUG [RS:3;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:11,356 DEBUG [RS:3;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 18:15:11,357 DEBUG [RS:3;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:11,357 DEBUG [RS:3;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:11,357 DEBUG [RS:3;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:11,357 DEBUG [RS:3;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 18:15:11,358 INFO [RS:3;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:11,358 INFO [RS:3;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:11,359 INFO [RS:3;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 18:15:11,369 INFO [RS:3;jenkins-hbase4:38273] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 18:15:11,369 INFO [RS:3;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38273,1689790511164-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 18:15:11,383 INFO [RS:3;jenkins-hbase4:38273] regionserver.Replication(203): jenkins-hbase4.apache.org,38273,1689790511164 started 2023-07-19 18:15:11,383 INFO [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38273,1689790511164, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38273, sessionid=0x1017ecb73ea000b 2023-07-19 18:15:11,383 DEBUG [RS:3;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 18:15:11,383 DEBUG [RS:3;jenkins-hbase4:38273] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:11,383 DEBUG [RS:3;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38273,1689790511164' 2023-07-19 18:15:11,383 DEBUG [RS:3;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 18:15:11,384 DEBUG [RS:3;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 18:15:11,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:11,384 DEBUG [RS:3;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 18:15:11,384 DEBUG [RS:3;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 18:15:11,384 DEBUG [RS:3;jenkins-hbase4:38273] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:11,384 DEBUG [RS:3;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38273,1689790511164' 2023-07-19 18:15:11,384 DEBUG [RS:3;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 18:15:11,385 DEBUG [RS:3;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 18:15:11,385 DEBUG [RS:3;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 18:15:11,385 INFO [RS:3;jenkins-hbase4:38273] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 18:15:11,385 INFO [RS:3;jenkins-hbase4:38273] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
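The entries that follow are the teardown in TestRSGroupsBase.tearDownAfterMethod: it adds an rsgroup named "master", lists the groups, and then tries to move the master's own address (jenkins-hbase4.apache.org:39305) into that group, which RSGroupAdminServer rejects with the ConstraintException logged below as "Got this on setup, FYI". A minimal sketch of the client side of that exchange, assuming the branch-2.4 RSGroupAdminClient API named in the stack trace; the wrapper class is illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Illustrative sketch of the calls behind the entries below: create the
    // "master" group, then attempt to move the master's address into it. The
    // server side (RSGroupAdminServer.moveServers) only accepts addresses of
    // region servers it knows about, so the call fails with ConstraintException.
    final class MoveMasterIntoGroup {
      static void tryMoveMaster(Connection conn) throws Exception {
        RSGroupAdmin admin = new RSGroupAdminClient(conn);
        admin.addRSGroup("master");
        try {
          admin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 39305)),
              "master");
        } catch (ConstraintException expected) {
          // Expected: the master is not an online region server, so it cannot
          // be placed in an rsgroup; the test logs this and continues.
        }
      }
    }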
2023-07-19 18:15:11,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:11,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:11,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:11,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:11,392 DEBUG [hconnection-0x19454ddf-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:15:11,393 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55546, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:15:11,397 DEBUG [hconnection-0x19454ddf-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 18:15:11,399 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34886, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 18:15:11,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:11,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:11,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39305] to rsgroup master 2023-07-19 18:15:11,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:11,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:39860 deadline: 1689791711403, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 2023-07-19 18:15:11,403 WARN [Listener at localhost/41015] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:15:11,405 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:11,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:11,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:11,405 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33689, jenkins-hbase4.apache.org:36501, jenkins-hbase4.apache.org:38273, jenkins-hbase4.apache.org:41789], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:11,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:11,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:11,456 INFO [Listener at localhost/41015] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=567 (was 515) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@4ae9f274 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:37897 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:33689 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:37897 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 0 on default port 37897 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x7995a4c1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1502525435.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for 
client DFSClient_NONMAPREDUCE_-286785144_17 at /127.0.0.1:37538 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:37897 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp623902554-2323 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@50797a0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8-prefix:jenkins-hbase4.apache.org,41789,1689790509619 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 275609692@qtp-1844169919-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:50044 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-58081c0a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 35083 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x3d4df155-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp312826310-2262 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:37897 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8-prefix:jenkins-hbase4.apache.org,33689,1689790509470 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp312826310-2260 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-790118196_17 at /127.0.0.1:37730 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1040865871-2601 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x590c5127-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:37897 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server idle connection scanner for port 40907 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:39265 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-790118196_17 at /127.0.0.1:37548 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1768523689-2230 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/41015-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1822044341-2293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41015-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp312826310-2257-acceptor-0@3b9c342f-ServerConnector@41dce8ff{HTTP/1.1, (http/1.1)}{0.0.0.0:33863} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:37897 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/37435-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 0 on default port 35083 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1778283068_17 at /127.0.0.1:33856 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741829_1005] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 40907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1768523689-2225 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/658776246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x19454ddf-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:36501 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-37e7dc34-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1768523689-2229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-790118196_17 at /127.0.0.1:37522 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x58d21df0-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp623902554-2316 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/658776246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@756f891f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1768523689-2226-acceptor-0@7c3c1250-ServerConnector@43c372b2{HTTP/1.1, (http/1.1)}{0.0.0.0:45673} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp623902554-2321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/41015-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-632430600-172.31.14.131-1689790508583 heartbeating to localhost/127.0.0.1:37897 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data3/current/BP-632430600-172.31.14.131-1689790508583 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:39265 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@727a1309 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1454533833_17 at /127.0.0.1:37534 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:37897 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x19454ddf-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 37897 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4581ef8a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1822044341-2291 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp757316490-2327 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/658776246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41015-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-790118196_17 at /127.0.0.1:60858 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1822044341-2286 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/658776246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 35083 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase4:33689Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-286785144_17 at /127.0.0.1:37678 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp312826310-2261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1608132301@qtp-1844169919-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34067 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8-prefix:jenkins-hbase4.apache.org,41789,1689790509619.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:36501Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8-prefix:jenkins-hbase4.apache.org,36501,1689790509773 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41015.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: BP-632430600-172.31.14.131-1689790508583 heartbeating to localhost/127.0.0.1:37897 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:39265 from jenkins.hfs.6 java.lang.Object.wait(Native Method) 
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp623902554-2317-acceptor-0@5da9cbc1-ServerConnector@2247058a{HTTP/1.1, (http/1.1)}{0.0.0.0:35187} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27a65c76-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x3d4df155-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Session-HouseKeeper-29da6b3c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x58d21df0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp312826310-2259 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData-prefix:jenkins-hbase4.apache.org,39305,1689790509306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41015.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:39265 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-790118196_17 at /127.0.0.1:60872 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp623902554-2322 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1040865871-2603 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x590c5127-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 41015 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x27a65c76-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x590c5127 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1502525435.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x58d21df0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1502525435.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(411235823) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: hconnection-0x27a65c76-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1697833032@qtp-1005542903-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36677 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1040865871-2602 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp757316490-2330 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/658776246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@6cbf8af java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x7995a4c1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x7995a4c1-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 4 on default port 40907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:1;jenkins-hbase4:41789-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33827,1689790503765 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@7f5530fa java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41015.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: hconnection-0x27a65c76-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:41789 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Session-HouseKeeper-45417e41-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 41015 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x5e983e41 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1502525435.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 37897 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1768523689-2231 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689790510084 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: qtp312826310-2258 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41015-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp757316490-2333 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2578a542 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55505@0x49b7c08e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1502525435.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@50cef9dd sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 40907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: BP-632430600-172.31.14.131-1689790508583 heartbeating to localhost/127.0.0.1:37897 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 35083 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 41015 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27a65c76-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1768523689-2227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data6/current/BP-632430600-172.31.14.131-1689790508583 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1778283068_17 at /127.0.0.1:41694 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1768523689-2228 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33689 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp623902554-2318 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41789Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:38273Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp757316490-2328 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/658776246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 40907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data2/current/BP-632430600-172.31.14.131-1689790508583 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 41015 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp623902554-2320 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-790118196_17 at /127.0.0.1:41664 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp623902554-2319 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x341978eb-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27a65c76-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x5e983e41-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 1 on default port 37897 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data4/current/BP-632430600-172.31.14.131-1689790508583 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:37897 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 40907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-2715cf81-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp757316490-2331-acceptor-0@76c92138-ServerConnector@23e4505e{HTTP/1.1, (http/1.1)}{0.0.0.0:44967} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1040865871-2598 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@717c3e8b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:37897 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 41015 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: 1171248988@qtp-328433220-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:50044@0x0d3af6c2-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/41015 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:38273 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27a65c76-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp757316490-2334 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data5/current/BP-632430600-172.31.14.131-1689790508583 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:37897 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 291439604@qtp-1981377345-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-286785144_17 at /127.0.0.1:37592 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:33689-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1040865871-2596 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/658776246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ProcessThread(sid:0 cport:50044): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp757316490-2332 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 35083 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/41015-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/41015-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 2 on default port 41015 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1768523689-2232 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x0d3af6c2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1502525435.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55505@0x49b7c08e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 35083 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55505@0x49b7c08e-SendThread(127.0.0.1:55505) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x3d4df155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1502525435.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 2125030820@qtp-328433220-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46759 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp312826310-2263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-286785144_17 at /127.0.0.1:60860 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:39265 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1040865871-2599 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41015.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1822044341-2289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@26179e00[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41015-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@63ec0b9f java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:38273-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x0d3af6c2-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 4 on default port 37897 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x27a65c76-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1454533833_17 at /127.0.0.1:37664 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41015-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-632430600-172.31.14.131-1689790508583:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:39265 
java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1822044341-2287-acceptor-0@747ede51-ServerConnector@6a7c0e78{HTTP/1.1, (http/1.1)}{0.0.0.0:36633} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:39265 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x341978eb-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@1d690df7[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1565186766@qtp-1005542903-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/41015-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1040865871-2600 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data1/current/BP-632430600-172.31.14.131-1689790508583 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x5e983e41-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: 
BP-632430600-172.31.14.131-1689790508583:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50044@0x341978eb sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1502525435.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-790118196_17 at /127.0.0.1:37674 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/41015-SendThread(127.0.0.1:50044) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/41015-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1822044341-2290 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp312826310-2256 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/658776246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5afe4fdc sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1040865871-2597-acceptor-0@1c7c00df-ServerConnector@50f9ce59{HTTP/1.1, (http/1.1)}{0.0.0.0:43257} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1822044341-2292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77fc54a2-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1778283068_17 at /127.0.0.1:39780 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 37897 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1822044341-2288 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-790118196_17 at /127.0.0.1:37692 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37435-SendThread(127.0.0.1:55505) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RS:2;jenkins-hbase4:36501-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:39265 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689790510084 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp757316490-2329 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/658776246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41789 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5c8bb404 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@549be5ba java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1454533833_17 at /127.0.0.1:60850 [Receiving block BP-632430600-172.31.14.131-1689790508583:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1530815379) connection to localhost/127.0.0.1:39265 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 228826702@qtp-1981377345-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46663 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39305,1689790509306 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: M:0;jenkins-hbase4:39305 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=834 (was 795) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=468 (was 474), ProcessCount=173 (was 173), AvailableMemoryMB=2493 (was 2604) 2023-07-19 18:15:11,460 WARN [Listener at localhost/41015] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-19 18:15:11,480 INFO [Listener at localhost/41015] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=567, OpenFileDescriptor=834, MaxFileDescriptor=60000, SystemLoadAverage=468, ProcessCount=173, AvailableMemoryMB=2492 2023-07-19 18:15:11,480 WARN [Listener at localhost/41015] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-19 18:15:11,481 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-19 18:15:11,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:11,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:11,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:11,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 18:15:11,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-19 18:15:11,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-19 18:15:11,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-19 18:15:11,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-19 18:15:11,487 INFO [RS:3;jenkins-hbase4:38273] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38273%2C1689790511164, suffix=, logDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,38273,1689790511164, archiveDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/oldWALs, maxLogs=32
2023-07-19 18:15:11,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-19 18:15:11,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-19 18:15:11,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-19 18:15:11,494 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-19 18:15:11,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-19 18:15:11,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-19 18:15:11,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-19 18:15:11,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-19 18:15:11,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-19 18:15:11,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-19 18:15:11,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-19 18:15:11,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39305] to rsgroup master
2023-07-19 18:15:11,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-19 18:15:11,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:39860 deadline: 1689791711503, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist.
2023-07-19 18:15:11,505 WARN [Listener at localhost/41015] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 18:15:11,507 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:11,514 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK] 2023-07-19 18:15:11,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:11,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:11,515 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK] 2023-07-19 18:15:11,516 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK] 2023-07-19 18:15:11,518 INFO [RS:3;jenkins-hbase4:38273] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,38273,1689790511164/jenkins-hbase4.apache.org%2C38273%2C1689790511164.1689790511487 2023-07-19 18:15:11,518 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33689, jenkins-hbase4.apache.org:36501, jenkins-hbase4.apache.org:38273, jenkins-hbase4.apache.org:41789], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:11,518 DEBUG [RS:3;jenkins-hbase4:38273] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37729,DS-62660876-5554-4c01-8f44-d352bdaabc93,DISK], DatanodeInfoWithStorage[127.0.0.1:32927,DS-b82a440c-7798-431e-9221-2eece5a20bac,DISK], DatanodeInfoWithStorage[127.0.0.1:45633,DS-fc4c164f-7a41-4289-b1f1-c47b4294e2b4,DISK]] 2023-07-19 18:15:11,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:11,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:11,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:11,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] 
procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-19 18:15:11,523 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:15:11,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-19 18:15:11,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 18:15:11,525 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:11,526 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:11,526 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:11,529 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 18:15:11,531 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/default/t1/265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:11,531 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/default/t1/265045e2ee141be972a1cfdc3b28ece0 empty. 2023-07-19 18:15:11,532 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/default/t1/265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:11,532 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-19 18:15:11,551 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-19 18:15:11,552 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 265045e2ee141be972a1cfdc3b28ece0, NAME => 't1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp 2023-07-19 18:15:11,572 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:11,572 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 265045e2ee141be972a1cfdc3b28ece0, disabling compactions & flushes 2023-07-19 18:15:11,572 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region 
t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 2023-07-19 18:15:11,572 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 2023-07-19 18:15:11,572 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. after waiting 0 ms 2023-07-19 18:15:11,572 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 2023-07-19 18:15:11,572 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 2023-07-19 18:15:11,572 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 265045e2ee141be972a1cfdc3b28ece0: 2023-07-19 18:15:11,575 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 18:15:11,576 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689790511575"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790511575"}]},"ts":"1689790511575"} 2023-07-19 18:15:11,577 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 18:15:11,578 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 18:15:11,578 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790511578"}]},"ts":"1689790511578"} 2023-07-19 18:15:11,579 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-19 18:15:11,583 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 18:15:11,583 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 18:15:11,583 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 18:15:11,583 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 18:15:11,583 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-19 18:15:11,583 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 18:15:11,583 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=265045e2ee141be972a1cfdc3b28ece0, ASSIGN}] 2023-07-19 18:15:11,584 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=265045e2ee141be972a1cfdc3b28ece0, ASSIGN 2023-07-19 18:15:11,585 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=t1, region=265045e2ee141be972a1cfdc3b28ece0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41789,1689790509619; forceNewPlan=false, retain=false 2023-07-19 18:15:11,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 18:15:11,735 INFO [jenkins-hbase4:39305] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 18:15:11,737 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=265045e2ee141be972a1cfdc3b28ece0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:11,737 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689790511737"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790511737"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790511737"}]},"ts":"1689790511737"} 2023-07-19 18:15:11,739 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 265045e2ee141be972a1cfdc3b28ece0, server=jenkins-hbase4.apache.org,41789,1689790509619}] 2023-07-19 18:15:11,747 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 18:15:11,747 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-19 18:15:11,747 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 18:15:11,747 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-19 18:15:11,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 18:15:11,900 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 
2023-07-19 18:15:11,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 265045e2ee141be972a1cfdc3b28ece0, NAME => 't1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.', STARTKEY => '', ENDKEY => ''} 2023-07-19 18:15:11,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:11,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 18:15:11,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:11,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:11,903 INFO [StoreOpener-265045e2ee141be972a1cfdc3b28ece0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:11,904 DEBUG [StoreOpener-265045e2ee141be972a1cfdc3b28ece0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/default/t1/265045e2ee141be972a1cfdc3b28ece0/cf1 2023-07-19 18:15:11,904 DEBUG [StoreOpener-265045e2ee141be972a1cfdc3b28ece0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/default/t1/265045e2ee141be972a1cfdc3b28ece0/cf1 2023-07-19 18:15:11,905 INFO [StoreOpener-265045e2ee141be972a1cfdc3b28ece0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 265045e2ee141be972a1cfdc3b28ece0 columnFamilyName cf1 2023-07-19 18:15:11,906 INFO [StoreOpener-265045e2ee141be972a1cfdc3b28ece0-1] regionserver.HStore(310): Store=265045e2ee141be972a1cfdc3b28ece0/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 18:15:11,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/default/t1/265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:11,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/default/t1/265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:11,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:11,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/default/t1/265045e2ee141be972a1cfdc3b28ece0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 18:15:11,912 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 265045e2ee141be972a1cfdc3b28ece0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11510886080, jitterRate=0.0720348060131073}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 18:15:11,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 265045e2ee141be972a1cfdc3b28ece0: 2023-07-19 18:15:11,913 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0., pid=14, masterSystemTime=1689790511891 2023-07-19 18:15:11,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 2023-07-19 18:15:11,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 2023-07-19 18:15:11,916 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=265045e2ee141be972a1cfdc3b28ece0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:11,916 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689790511916"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689790511916"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689790511916"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689790511916"}]},"ts":"1689790511916"} 2023-07-19 18:15:11,921 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-19 18:15:11,921 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 265045e2ee141be972a1cfdc3b28ece0, server=jenkins-hbase4.apache.org,41789,1689790509619 in 179 msec 2023-07-19 18:15:11,931 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-19 18:15:11,931 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=265045e2ee141be972a1cfdc3b28ece0, ASSIGN in 338 msec 2023-07-19 18:15:11,932 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 18:15:11,932 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790511932"}]},"ts":"1689790511932"} 2023-07-19 18:15:11,934 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-19 18:15:11,937 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 18:15:11,939 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 416 msec 2023-07-19 18:15:12,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 18:15:12,128 INFO [Listener at localhost/41015] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-19 18:15:12,128 DEBUG [Listener at localhost/41015] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-19 18:15:12,128 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:12,130 INFO [Listener at localhost/41015] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-19 18:15:12,130 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:12,130 INFO [Listener at localhost/41015] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 2023-07-19 18:15:12,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 18:15:12,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-19 18:15:12,134 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 18:15:12,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-19 18:15:12,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:39860 deadline: 1689790572131, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-19 18:15:12,139 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:12,141 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=8 msec 2023-07-19 18:15:12,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:12,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:12,242 INFO [Listener at localhost/41015] client.HBaseAdmin$15(890): Started disable of t1 2023-07-19 18:15:12,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-19 18:15:12,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-19 18:15:12,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 18:15:12,246 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790512246"}]},"ts":"1689790512246"} 2023-07-19 18:15:12,247 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-19 18:15:12,250 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-19 18:15:12,251 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=265045e2ee141be972a1cfdc3b28ece0, UNASSIGN}] 2023-07-19 18:15:12,252 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=265045e2ee141be972a1cfdc3b28ece0, UNASSIGN 2023-07-19 18:15:12,252 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=265045e2ee141be972a1cfdc3b28ece0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:12,253 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689790512252"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689790512252"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689790512252"}]},"ts":"1689790512252"} 2023-07-19 18:15:12,254 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 265045e2ee141be972a1cfdc3b28ece0, server=jenkins-hbase4.apache.org,41789,1689790509619}] 2023-07-19 18:15:12,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 18:15:12,406 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:12,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 265045e2ee141be972a1cfdc3b28ece0, disabling compactions & flushes 2023-07-19 18:15:12,406 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 2023-07-19 18:15:12,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 2023-07-19 18:15:12,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. after waiting 0 ms 2023-07-19 18:15:12,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 2023-07-19 18:15:12,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/default/t1/265045e2ee141be972a1cfdc3b28ece0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 18:15:12,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0. 
2023-07-19 18:15:12,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 265045e2ee141be972a1cfdc3b28ece0: 2023-07-19 18:15:12,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:12,413 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=265045e2ee141be972a1cfdc3b28ece0, regionState=CLOSED 2023-07-19 18:15:12,413 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689790512412"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689790512412"}]},"ts":"1689790512412"} 2023-07-19 18:15:12,415 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-19 18:15:12,415 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 265045e2ee141be972a1cfdc3b28ece0, server=jenkins-hbase4.apache.org,41789,1689790509619 in 160 msec 2023-07-19 18:15:12,416 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-19 18:15:12,416 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=265045e2ee141be972a1cfdc3b28ece0, UNASSIGN in 164 msec 2023-07-19 18:15:12,417 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689790512417"}]},"ts":"1689790512417"} 2023-07-19 18:15:12,418 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-19 18:15:12,420 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-19 18:15:12,423 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 179 msec 2023-07-19 18:15:12,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 18:15:12,549 INFO [Listener at localhost/41015] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-19 18:15:12,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-19 18:15:12,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-19 18:15:12,553 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-19 18:15:12,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-19 18:15:12,553 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-19 18:15:12,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:12,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:12,557 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/default/t1/265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:12,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 18:15:12,559 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/default/t1/265045e2ee141be972a1cfdc3b28ece0/cf1, FileablePath, hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/default/t1/265045e2ee141be972a1cfdc3b28ece0/recovered.edits] 2023-07-19 18:15:12,564 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/default/t1/265045e2ee141be972a1cfdc3b28ece0/recovered.edits/4.seqid to hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/archive/data/default/t1/265045e2ee141be972a1cfdc3b28ece0/recovered.edits/4.seqid 2023-07-19 18:15:12,564 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/.tmp/data/default/t1/265045e2ee141be972a1cfdc3b28ece0 2023-07-19 18:15:12,564 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-19 18:15:12,567 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-19 18:15:12,568 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-19 18:15:12,570 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-19 18:15:12,571 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-19 18:15:12,571 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 2023-07-19 18:15:12,571 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689790512571"}]},"ts":"9223372036854775807"} 2023-07-19 18:15:12,573 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 18:15:12,573 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 265045e2ee141be972a1cfdc3b28ece0, NAME => 't1,,1689790511520.265045e2ee141be972a1cfdc3b28ece0.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 18:15:12,573 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 
2023-07-19 18:15:12,573 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689790512573"}]},"ts":"9223372036854775807"} 2023-07-19 18:15:12,574 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-19 18:15:12,576 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-19 18:15:12,577 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 26 msec 2023-07-19 18:15:12,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 18:15:12,659 INFO [Listener at localhost/41015] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-19 18:15:12,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:12,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 18:15:12,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:12,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:12,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:12,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:12,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:12,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:12,677 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:12,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:12,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:12,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:12,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:12,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39305] to rsgroup master 2023-07-19 18:15:12,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:12,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:39860 deadline: 1689791712686, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 2023-07-19 18:15:12,687 WARN [Listener at localhost/41015] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:15:12,691 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:12,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,692 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33689, jenkins-hbase4.apache.org:36501, jenkins-hbase4.apache.org:38273, jenkins-hbase4.apache.org:41789], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:12,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:12,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:12,710 INFO [Listener at localhost/41015] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=575 (was 567) - Thread LEAK? -, OpenFileDescriptor=840 (was 834) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=468 (was 468), ProcessCount=171 (was 173), AvailableMemoryMB=4505 (was 2492) - AvailableMemoryMB LEAK? 
- 2023-07-19 18:15:12,710 WARN [Listener at localhost/41015] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-19 18:15:12,728 INFO [Listener at localhost/41015] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=575, OpenFileDescriptor=840, MaxFileDescriptor=60000, SystemLoadAverage=468, ProcessCount=171, AvailableMemoryMB=4505 2023-07-19 18:15:12,728 WARN [Listener at localhost/41015] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-19 18:15:12,728 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-19 18:15:12,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:12,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 18:15:12,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:12,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:12,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:12,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:12,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:12,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:12,740 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:12,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:12,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:12,745 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:12,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:12,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39305] to rsgroup master 2023-07-19 18:15:12,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:12,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39860 deadline: 1689791712749, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 2023-07-19 18:15:12,750 WARN [Listener at localhost/41015] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 18:15:12,751 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:12,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,752 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33689, jenkins-hbase4.apache.org:36501, jenkins-hbase4.apache.org:38273, jenkins-hbase4.apache.org:41789], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:12,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:12,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:12,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-19 18:15:12,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:15:12,755 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-19 18:15:12,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-19 18:15:12,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 18:15:12,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:12,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 18:15:12,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:12,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:12,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:12,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:12,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:12,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:12,772 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:12,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:12,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:12,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:12,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:12,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39305] to rsgroup master 2023-07-19 18:15:12,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:12,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39860 deadline: 1689791712784, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 2023-07-19 18:15:12,785 WARN [Listener at localhost/41015] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:15:12,786 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:12,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,788 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33689, jenkins-hbase4.apache.org:36501, jenkins-hbase4.apache.org:38273, jenkins-hbase4.apache.org:41789], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:12,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:12,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:12,806 INFO [Listener at localhost/41015] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=577 (was 575) - Thread LEAK? 
-, OpenFileDescriptor=840 (was 840), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=468 (was 468), ProcessCount=171 (was 171), AvailableMemoryMB=4505 (was 4505) 2023-07-19 18:15:12,806 WARN [Listener at localhost/41015] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-19 18:15:12,827 INFO [Listener at localhost/41015] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=577, OpenFileDescriptor=840, MaxFileDescriptor=60000, SystemLoadAverage=468, ProcessCount=171, AvailableMemoryMB=4505 2023-07-19 18:15:12,827 WARN [Listener at localhost/41015] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-19 18:15:12,827 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-19 18:15:12,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:12,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 18:15:12,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:12,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:12,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:12,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:12,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:12,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:12,841 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:12,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:12,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,844 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:12,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:12,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:12,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39305] to rsgroup master 2023-07-19 18:15:12,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:12,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39860 deadline: 1689791712852, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 2023-07-19 18:15:12,853 WARN [Listener at localhost/41015] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 18:15:12,855 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:12,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,856 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33689, jenkins-hbase4.apache.org:36501, jenkins-hbase4.apache.org:38273, jenkins-hbase4.apache.org:41789], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:12,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:12,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:12,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:12,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
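
The repeated ConstraintException in the trace above is raised during the per-method cleanup in TestRSGroupsBase: the test tries to move the master's address (jenkins-hbase4.apache.org:39305) back into the "master" RSGroup, the master is not a registered region server, so RSGroupAdminServer.moveServers rejects the call and the test only logs "Got this on setup, FYI" before continuing. Below is a minimal sketch of that kind of tolerant cleanup call, assuming the branch-2.4 RSGroupAdminClient API; the class and helper names are illustrative and not the test's exact code.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class TolerantMasterGroupCleanup {
        // Hypothetical helper mirroring the cleanup the log records: move the
        // master's host:port into the "master" group and tolerate the rejection.
        static void tryMoveMasterToMasterGroup(Connection conn, String host, int port) {
            try {
                new RSGroupAdminClient(conn).moveServers(
                    Collections.singleton(Address.fromParts(host, port)), "master");
            } catch (IOException e) {
                // The master is not a registered region server, so the call fails with
                // ConstraintException: "Server ... is either offline or it does not exist."
                // The test treats this as informational only ("Got this on setup, FYI").
            }
        }
    }
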
2023-07-19 18:15:12,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:12,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:12,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:12,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:12,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:12,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:12,875 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:12,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:12,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:12,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:12,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:12,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39305] to rsgroup master 2023-07-19 18:15:12,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:12,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39860 deadline: 1689791712885, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 2023-07-19 18:15:12,886 WARN [Listener at localhost/41015] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:15:12,888 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:12,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,889 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33689, jenkins-hbase4.apache.org:36501, jenkins-hbase4.apache.org:38273, jenkins-hbase4.apache.org:41789], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:12,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:12,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:12,910 INFO [Listener at localhost/41015] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=578 (was 577) - Thread LEAK? 
-, OpenFileDescriptor=840 (was 840), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=468 (was 468), ProcessCount=171 (was 171), AvailableMemoryMB=4505 (was 4505) 2023-07-19 18:15:12,910 WARN [Listener at localhost/41015] hbase.ResourceChecker(130): Thread=578 is superior to 500 2023-07-19 18:15:12,934 INFO [Listener at localhost/41015] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=578, OpenFileDescriptor=840, MaxFileDescriptor=60000, SystemLoadAverage=468, ProcessCount=171, AvailableMemoryMB=4504 2023-07-19 18:15:12,934 WARN [Listener at localhost/41015] hbase.ResourceChecker(130): Thread=578 is superior to 500 2023-07-19 18:15:12,934 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-19 18:15:12,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:12,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 18:15:12,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:12,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:12,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:12,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:12,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:12,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:12,949 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:12,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:12,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,952 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:12,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:12,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:12,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39305] to rsgroup master 2023-07-19 18:15:12,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:12,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39860 deadline: 1689791712960, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 2023-07-19 18:15:12,960 WARN [Listener at localhost/41015] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 18:15:12,962 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:12,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,963 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33689, jenkins-hbase4.apache.org:36501, jenkins-hbase4.apache.org:38273, jenkins-hbase4.apache.org:41789], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:12,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:12,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:12,964 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-19 18:15:12,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-19 18:15:12,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-19 18:15:12,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:12,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:12,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 18:15:12,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:12,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:12,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:12,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-19 18:15:12,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-19 18:15:12,979 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 18:15:12,984 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:15:12,986 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-19 18:15:13,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 18:15:13,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-19 18:15:13,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:13,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:39860 deadline: 1689791713081, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-19 18:15:13,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-19 18:15:13,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-19 18:15:13,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 18:15:13,103 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-19 18:15:13,104 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-19 18:15:13,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 18:15:13,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-19 18:15:13,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-19 18:15:13,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:13,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-19 18:15:13,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:13,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 18:15:13,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:13,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:13,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:13,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-19 18:15:13,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 18:15:13,221 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 18:15:13,223 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 18:15:13,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-19 18:15:13,224 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 18:15:13,225 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-19 18:15:13,225 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 18:15:13,226 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 18:15:13,228 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 18:15:13,228 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-19 18:15:13,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-19 18:15:13,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-19 18:15:13,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-19 18:15:13,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:13,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:13,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-19 18:15:13,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:13,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:13,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:13,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:13,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:39860 deadline: 1689790573335, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-19 18:15:13,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:13,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:13,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:13,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
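
The Group_foo sequence above exercises the namespace/RSGroup constraint in both directions: a namespace created with the hbase.rsgroup.name property is pinned to that group, so removing the group fails with "RSGroup Group_foo is referenced by namespace: Group_foo" until the namespace is modified or deleted, and conversely RSGroupAdminEndpoint.preCreateNamespace rejects a namespace that names a non-existent group ("Region server group foo does not exist"). A minimal sketch of creating such a bound namespace with the standard Admin API follows; the connection setup and class name are illustrative, not taken from the test.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Illustrative sketch: bind a namespace to an RSGroup via its configuration.
    public class CreateGroupBoundNamespace {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // The master-side coprocessor (RSGroupAdminEndpoint.preCreateNamespace)
                // rejects the request if the named group does not exist.
                NamespaceDescriptor ns = NamespaceDescriptor.create("Group_foo")
                    .addConfiguration("hbase.rsgroup.name", "Group_foo")
                    .build();
                admin.createNamespace(ns);
                // While the namespace carries hbase.rsgroup.name=Group_foo, removing the
                // group fails with "RSGroup Group_foo is referenced by namespace: Group_foo".
            }
        }
    }
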
2023-07-19 18:15:13,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:13,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:13,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:13,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-19 18:15:13,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:13,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:13,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 18:15:13,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:13,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 18:15:13,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 18:15:13,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 18:15:13,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 18:15:13,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 18:15:13,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 18:15:13,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:13,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 18:15:13,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 18:15:13,360 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 18:15:13,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 18:15:13,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 18:15:13,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 18:15:13,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 18:15:13,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 18:15:13,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:13,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:13,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39305] to rsgroup master 2023-07-19 18:15:13,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 18:15:13,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39860 deadline: 1689791713369, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 2023-07-19 18:15:13,369 WARN [Listener at localhost/41015] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39305 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 18:15:13,371 INFO [Listener at localhost/41015] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 18:15:13,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 18:15:13,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 18:15:13,372 INFO [Listener at localhost/41015] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33689, jenkins-hbase4.apache.org:36501, jenkins-hbase4.apache.org:38273, jenkins-hbase4.apache.org:41789], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 18:15:13,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 18:15:13,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 18:15:13,390 INFO [Listener at localhost/41015] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=578 (was 578), OpenFileDescriptor=835 (was 840), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 468), ProcessCount=171 (was 171), AvailableMemoryMB=4504 (was 4504) 2023-07-19 18:15:13,390 WARN [Listener at localhost/41015] hbase.ResourceChecker(130): Thread=578 is superior to 500 2023-07-19 18:15:13,390 INFO [Listener at localhost/41015] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-19 18:15:13,390 INFO [Listener at localhost/41015] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-19 18:15:13,391 DEBUG [Listener at localhost/41015] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x590c5127 to 127.0.0.1:50044 2023-07-19 18:15:13,391 DEBUG [Listener at localhost/41015] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:13,391 DEBUG [Listener at localhost/41015] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-19 
18:15:13,391 DEBUG [Listener at localhost/41015] util.JVMClusterUtil(257): Found active master hash=1699421006, stopped=false 2023-07-19 18:15:13,391 DEBUG [Listener at localhost/41015] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 18:15:13,391 DEBUG [Listener at localhost/41015] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 18:15:13,391 INFO [Listener at localhost/41015] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,39305,1689790509306 2023-07-19 18:15:13,394 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:13,394 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:13,394 INFO [Listener at localhost/41015] procedure2.ProcedureExecutor(629): Stopping 2023-07-19 18:15:13,394 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:13,395 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:13,394 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:13,395 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:13,395 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 18:15:13,395 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 18:15:13,395 DEBUG [Listener at localhost/41015] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7995a4c1 to 127.0.0.1:50044 2023-07-19 18:15:13,396 DEBUG [Listener at localhost/41015] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:13,396 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:13,396 INFO [Listener at localhost/41015] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33689,1689790509470' ***** 2023-07-19 18:15:13,396 INFO [Listener at localhost/41015] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:13,396 INFO 
[Listener at localhost/41015] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41789,1689790509619' ***** 2023-07-19 18:15:13,396 INFO [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:13,396 INFO [Listener at localhost/41015] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:13,396 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:13,396 INFO [Listener at localhost/41015] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36501,1689790509773' ***** 2023-07-19 18:15:13,398 INFO [Listener at localhost/41015] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:13,398 INFO [Listener at localhost/41015] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38273,1689790511164' ***** 2023-07-19 18:15:13,399 INFO [Listener at localhost/41015] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 18:15:13,401 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 18:15:13,401 INFO [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:13,398 INFO [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 18:15:13,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:13,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 18:15:13,406 INFO [RS:0;jenkins-hbase4:33689] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5b487ac{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:13,400 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:13,406 INFO [RS:1;jenkins-hbase4:41789] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3c8603ce{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:13,406 INFO [RS:2;jenkins-hbase4:36501] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@192c0497{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:13,406 INFO [RS:0;jenkins-hbase4:33689] server.AbstractConnector(383): Stopped ServerConnector@41dce8ff{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:13,406 INFO [RS:3;jenkins-hbase4:38273] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@86fd3aa{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 18:15:13,406 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 
18:15:13,407 INFO [RS:1;jenkins-hbase4:41789] server.AbstractConnector(383): Stopped ServerConnector@6a7c0e78{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:13,407 INFO [RS:0;jenkins-hbase4:33689] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:13,407 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 18:15:13,407 INFO [RS:3;jenkins-hbase4:38273] server.AbstractConnector(383): Stopped ServerConnector@50f9ce59{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:13,407 INFO [RS:1;jenkins-hbase4:41789] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:13,408 INFO [RS:3;jenkins-hbase4:38273] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:13,408 INFO [RS:0;jenkins-hbase4:33689] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2652e034{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:13,408 INFO [RS:2;jenkins-hbase4:36501] server.AbstractConnector(383): Stopped ServerConnector@2247058a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 18:15:13,410 INFO [RS:0;jenkins-hbase4:33689] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3f5ab527{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:13,410 INFO [RS:2;jenkins-hbase4:36501] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 18:15:13,409 INFO [RS:3;jenkins-hbase4:38273] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@74c9cb11{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:13,409 INFO [RS:1;jenkins-hbase4:41789] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@743d266c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:13,411 INFO [RS:2;jenkins-hbase4:36501] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2648f553{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 18:15:13,413 INFO [RS:1;jenkins-hbase4:41789] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@737b34c2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:13,413 INFO [RS:0;jenkins-hbase4:33689] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:13,413 INFO [RS:2;jenkins-hbase4:36501] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@df82292{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:13,412 INFO [RS:3;jenkins-hbase4:38273] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@29f1e83{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir/,STOPPED} 2023-07-19 18:15:13,413 INFO [RS:0;jenkins-hbase4:33689] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 18:15:13,414 INFO [RS:0;jenkins-hbase4:33689] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 18:15:13,414 INFO [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(3305): Received CLOSE for a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:13,414 INFO [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33689,1689790509470 2023-07-19 18:15:13,414 DEBUG [RS:0;jenkins-hbase4:33689] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x341978eb to 127.0.0.1:50044 2023-07-19 18:15:13,415 DEBUG [RS:0;jenkins-hbase4:33689] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:13,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a8ca1f3882e62958c7ca91ce3cbb2d8e, disabling compactions & flushes 2023-07-19 18:15:13,415 INFO [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 18:15:13,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 2023-07-19 18:15:13,415 INFO [RS:1;jenkins-hbase4:41789] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:13,415 DEBUG [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1478): Online Regions={a8ca1f3882e62958c7ca91ce3cbb2d8e=hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e.} 2023-07-19 18:15:13,415 INFO [RS:1;jenkins-hbase4:41789] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 18:15:13,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 2023-07-19 18:15:13,415 INFO [RS:1;jenkins-hbase4:41789] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 18:15:13,415 INFO [RS:3;jenkins-hbase4:38273] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:13,415 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(3305): Received CLOSE for 2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:13,415 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 18:15:13,415 DEBUG [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1504): Waiting on a8ca1f3882e62958c7ca91ce3cbb2d8e 2023-07-19 18:15:13,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2968131a1076392f3ff887a5705b7862, disabling compactions & flushes 2023-07-19 18:15:13,415 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41789,1689790509619 2023-07-19 18:15:13,415 INFO [RS:3;jenkins-hbase4:38273] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-19 18:15:13,416 INFO [RS:3;jenkins-hbase4:38273] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 18:15:13,416 INFO [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:13,416 DEBUG [RS:3;jenkins-hbase4:38273] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0d3af6c2 to 127.0.0.1:50044 2023-07-19 18:15:13,415 INFO [RS:2;jenkins-hbase4:36501] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 18:15:13,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. after waiting 0 ms 2023-07-19 18:15:13,416 INFO [RS:2;jenkins-hbase4:36501] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 18:15:13,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 2023-07-19 18:15:13,416 DEBUG [RS:3;jenkins-hbase4:38273] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:13,415 DEBUG [RS:1;jenkins-hbase4:41789] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5e983e41 to 127.0.0.1:50044 2023-07-19 18:15:13,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 2023-07-19 18:15:13,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 2023-07-19 18:15:13,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. after waiting 0 ms 2023-07-19 18:15:13,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 2023-07-19 18:15:13,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 2968131a1076392f3ff887a5705b7862 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-19 18:15:13,416 DEBUG [RS:1;jenkins-hbase4:41789] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:13,416 INFO [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38273,1689790511164; all regions closed. 2023-07-19 18:15:13,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a8ca1f3882e62958c7ca91ce3cbb2d8e 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-19 18:15:13,416 INFO [RS:2;jenkins-hbase4:36501] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-19 18:15:13,417 INFO [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:13,417 DEBUG [RS:2;jenkins-hbase4:36501] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3d4df155 to 127.0.0.1:50044 2023-07-19 18:15:13,417 DEBUG [RS:2;jenkins-hbase4:36501] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:13,417 INFO [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36501,1689790509773; all regions closed. 2023-07-19 18:15:13,417 INFO [RS:1;jenkins-hbase4:41789] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 18:15:13,417 INFO [RS:1;jenkins-hbase4:41789] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:13,417 INFO [RS:1;jenkins-hbase4:41789] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 18:15:13,417 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-19 18:15:13,418 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-19 18:15:13,418 DEBUG [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 2968131a1076392f3ff887a5705b7862=hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862.} 2023-07-19 18:15:13,418 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 18:15:13,418 DEBUG [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1504): Waiting on 1588230740, 2968131a1076392f3ff887a5705b7862 2023-07-19 18:15:13,418 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 18:15:13,418 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 18:15:13,418 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 18:15:13,418 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 18:15:13,418 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-19 18:15:13,420 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:13,420 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:13,422 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,38273,1689790511164/jenkins-hbase4.apache.org%2C38273%2C1689790511164.1689790511487 not finished, retry = 0 2023-07-19 18:15:13,428 DEBUG [RS:2;jenkins-hbase4:36501] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/oldWALs 2023-07-19 18:15:13,428 INFO [RS:2;jenkins-hbase4:36501] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36501%2C1689790509773:(num 1689790510334) 2023-07-19 18:15:13,428 DEBUG 
[RS:2;jenkins-hbase4:36501] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:13,428 INFO [RS:2;jenkins-hbase4:36501] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:13,431 INFO [RS:2;jenkins-hbase4:36501] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:13,431 INFO [RS:2;jenkins-hbase4:36501] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 18:15:13,432 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 18:15:13,432 INFO [RS:2;jenkins-hbase4:36501] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:13,432 INFO [RS:2;jenkins-hbase4:36501] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 18:15:13,433 INFO [RS:2;jenkins-hbase4:36501] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36501 2023-07-19 18:15:13,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e/.tmp/m/662cd6cd385d43ada2e9bd0072ec021c 2023-07-19 18:15:13,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/namespace/2968131a1076392f3ff887a5705b7862/.tmp/info/9f9cda3ec388488189e7b2f8b1183299 2023-07-19 18:15:13,460 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:13,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 662cd6cd385d43ada2e9bd0072ec021c 2023-07-19 18:15:13,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e/.tmp/m/662cd6cd385d43ada2e9bd0072ec021c as hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e/m/662cd6cd385d43ada2e9bd0072ec021c 2023-07-19 18:15:13,466 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/.tmp/info/117028e17c0b4590acd6f13d573eed02 2023-07-19 18:15:13,467 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9f9cda3ec388488189e7b2f8b1183299 2023-07-19 18:15:13,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/namespace/2968131a1076392f3ff887a5705b7862/.tmp/info/9f9cda3ec388488189e7b2f8b1183299 as 
hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/namespace/2968131a1076392f3ff887a5705b7862/info/9f9cda3ec388488189e7b2f8b1183299 2023-07-19 18:15:13,473 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 117028e17c0b4590acd6f13d573eed02 2023-07-19 18:15:13,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 662cd6cd385d43ada2e9bd0072ec021c 2023-07-19 18:15:13,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e/m/662cd6cd385d43ada2e9bd0072ec021c, entries=12, sequenceid=29, filesize=5.4 K 2023-07-19 18:15:13,475 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for a8ca1f3882e62958c7ca91ce3cbb2d8e in 59ms, sequenceid=29, compaction requested=false 2023-07-19 18:15:13,480 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9f9cda3ec388488189e7b2f8b1183299 2023-07-19 18:15:13,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/namespace/2968131a1076392f3ff887a5705b7862/info/9f9cda3ec388488189e7b2f8b1183299, entries=3, sequenceid=9, filesize=5.0 K 2023-07-19 18:15:13,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 2968131a1076392f3ff887a5705b7862 in 65ms, sequenceid=9, compaction requested=false 2023-07-19 18:15:13,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/rsgroup/a8ca1f3882e62958c7ca91ce3cbb2d8e/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-19 18:15:13,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 18:15:13,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 2023-07-19 18:15:13,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a8ca1f3882e62958c7ca91ce3cbb2d8e: 2023-07-19 18:15:13,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689790510701.a8ca1f3882e62958c7ca91ce3cbb2d8e. 2023-07-19 18:15:13,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/namespace/2968131a1076392f3ff887a5705b7862/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-19 18:15:13,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 
2023-07-19 18:15:13,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2968131a1076392f3ff887a5705b7862: 2023-07-19 18:15:13,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689790510670.2968131a1076392f3ff887a5705b7862. 2023-07-19 18:15:13,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/.tmp/rep_barrier/3173e616590e4879a6c63d700af6e8b4 2023-07-19 18:15:13,507 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3173e616590e4879a6c63d700af6e8b4 2023-07-19 18:15:13,518 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/.tmp/table/e889ecd1f68945fd886d22fe69e30a48 2023-07-19 18:15:13,521 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:13,521 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:13,521 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:13,521 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:13,521 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:13,521 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:13,521 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36501,1689790509773 2023-07-19 18:15:13,521 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:13,521 DEBUG 
[Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 18:15:13,524 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e889ecd1f68945fd886d22fe69e30a48 2023-07-19 18:15:13,524 DEBUG [RS:3;jenkins-hbase4:38273] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/oldWALs 2023-07-19 18:15:13,524 INFO [RS:3;jenkins-hbase4:38273] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38273%2C1689790511164:(num 1689790511487) 2023-07-19 18:15:13,524 DEBUG [RS:3;jenkins-hbase4:38273] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:13,524 INFO [RS:3;jenkins-hbase4:38273] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:13,525 INFO [RS:3;jenkins-hbase4:38273] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:13,525 INFO [RS:3;jenkins-hbase4:38273] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 18:15:13,525 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 18:15:13,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/.tmp/info/117028e17c0b4590acd6f13d573eed02 as hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/info/117028e17c0b4590acd6f13d573eed02 2023-07-19 18:15:13,525 INFO [RS:3;jenkins-hbase4:38273] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:13,525 INFO [RS:3;jenkins-hbase4:38273] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-19 18:15:13,526 INFO [RS:3;jenkins-hbase4:38273] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38273 2023-07-19 18:15:13,532 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 117028e17c0b4590acd6f13d573eed02 2023-07-19 18:15:13,532 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/info/117028e17c0b4590acd6f13d573eed02, entries=22, sequenceid=26, filesize=7.3 K 2023-07-19 18:15:13,533 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/.tmp/rep_barrier/3173e616590e4879a6c63d700af6e8b4 as hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/rep_barrier/3173e616590e4879a6c63d700af6e8b4 2023-07-19 18:15:13,538 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3173e616590e4879a6c63d700af6e8b4 2023-07-19 18:15:13,538 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/rep_barrier/3173e616590e4879a6c63d700af6e8b4, entries=1, sequenceid=26, filesize=4.9 K 2023-07-19 18:15:13,539 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/.tmp/table/e889ecd1f68945fd886d22fe69e30a48 as hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/table/e889ecd1f68945fd886d22fe69e30a48 2023-07-19 18:15:13,544 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e889ecd1f68945fd886d22fe69e30a48 2023-07-19 18:15:13,544 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/table/e889ecd1f68945fd886d22fe69e30a48, entries=6, sequenceid=26, filesize=5.1 K 2023-07-19 18:15:13,545 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 127ms, sequenceid=26, compaction requested=false 2023-07-19 18:15:13,553 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-19 18:15:13,554 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 18:15:13,555 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 18:15:13,555 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 18:15:13,555 
DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-19 18:15:13,615 INFO [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33689,1689790509470; all regions closed. 2023-07-19 18:15:13,618 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41789,1689790509619; all regions closed. 2023-07-19 18:15:13,622 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/WALs/jenkins-hbase4.apache.org,41789,1689790509619/jenkins-hbase4.apache.org%2C41789%2C1689790509619.meta.1689790510552.meta not finished, retry = 0 2023-07-19 18:15:13,622 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:13,622 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:13,622 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:13,622 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38273,1689790511164 2023-07-19 18:15:13,623 DEBUG [RS:0;jenkins-hbase4:33689] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/oldWALs 2023-07-19 18:15:13,623 INFO [RS:0;jenkins-hbase4:33689] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33689%2C1689790509470:(num 1689790510336) 2023-07-19 18:15:13,623 DEBUG [RS:0;jenkins-hbase4:33689] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 18:15:13,623 INFO [RS:0;jenkins-hbase4:33689] regionserver.LeaseManager(133): Closed leases 2023-07-19 18:15:13,623 INFO [RS:0;jenkins-hbase4:33689] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 18:15:13,623 INFO [RS:0;jenkins-hbase4:33689] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 18:15:13,623 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 18:15:13,623 INFO [RS:0;jenkins-hbase4:33689] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 18:15:13,623 INFO [RS:0;jenkins-hbase4:33689] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-19 18:15:13,624 INFO [RS:0;jenkins-hbase4:33689] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33689
2023-07-19 18:15:13,625 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38273,1689790511164]
2023-07-19 18:15:13,625 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38273,1689790511164; numProcessing=1
2023-07-19 18:15:13,626 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33689,1689790509470
2023-07-19 18:15:13,626 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33689,1689790509470
2023-07-19 18:15:13,626 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-19 18:15:13,627 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38273,1689790511164 already deleted, retry=false
2023-07-19 18:15:13,627 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38273,1689790511164 expired; onlineServers=3
2023-07-19 18:15:13,627 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36501,1689790509773]
2023-07-19 18:15:13,627 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36501,1689790509773; numProcessing=2
2023-07-19 18:15:13,630 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36501,1689790509773 already deleted, retry=false
2023-07-19 18:15:13,630 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36501,1689790509773 expired; onlineServers=2
2023-07-19 18:15:13,630 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33689,1689790509470]
2023-07-19 18:15:13,631 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33689,1689790509470; numProcessing=3
2023-07-19 18:15:13,632 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33689,1689790509470 already deleted, retry=false
2023-07-19 18:15:13,632 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33689,1689790509470 expired; onlineServers=1
2023-07-19 18:15:13,725 DEBUG [RS:1;jenkins-hbase4:41789] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/oldWALs
2023-07-19 18:15:13,725 INFO [RS:1;jenkins-hbase4:41789] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41789%2C1689790509619.meta:.meta(num 1689790510552)
2023-07-19 18:15:13,730 DEBUG [RS:1;jenkins-hbase4:41789] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/oldWALs
2023-07-19 18:15:13,730 INFO [RS:1;jenkins-hbase4:41789] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41789%2C1689790509619:(num 1689790510339)
2023-07-19 18:15:13,730 DEBUG [RS:1;jenkins-hbase4:41789] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-19 18:15:13,730 INFO [RS:1;jenkins-hbase4:41789] regionserver.LeaseManager(133): Closed leases
2023-07-19 18:15:13,730 INFO [RS:1;jenkins-hbase4:41789] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-07-19 18:15:13,730 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-19 18:15:13,732 INFO [RS:1;jenkins-hbase4:41789] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41789
2023-07-19 18:15:13,733 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41789,1689790509619
2023-07-19 18:15:13,733 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-19 18:15:13,736 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41789,1689790509619]
2023-07-19 18:15:13,736 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41789,1689790509619; numProcessing=4
2023-07-19 18:15:13,740 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41789,1689790509619 already deleted, retry=false
2023-07-19 18:15:13,740 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41789,1689790509619 expired; onlineServers=0
2023-07-19 18:15:13,740 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39305,1689790509306' *****
2023-07-19 18:15:13,740 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-19 18:15:13,741 DEBUG [M:0;jenkins-hbase4:39305] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6174833f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-19 18:15:13,741 INFO [M:0;jenkins-hbase4:39305] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-19 18:15:13,743 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-19 18:15:13,743 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-19 18:15:13,744 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-19 18:15:13,744 INFO [M:0;jenkins-hbase4:39305] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3f1c21cd{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-19 18:15:13,744 INFO [M:0;jenkins-hbase4:39305] server.AbstractConnector(383): Stopped ServerConnector@43c372b2{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-19 18:15:13,744 INFO [M:0;jenkins-hbase4:39305] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-19 18:15:13,745 INFO [M:0;jenkins-hbase4:39305] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3dad8b91{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-19 18:15:13,745 INFO [M:0;jenkins-hbase4:39305] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10f8b861{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/hadoop.log.dir/,STOPPED}
2023-07-19 18:15:13,746 INFO [M:0;jenkins-hbase4:39305] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39305,1689790509306
2023-07-19 18:15:13,746 INFO [M:0;jenkins-hbase4:39305] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39305,1689790509306; all regions closed.
2023-07-19 18:15:13,746 DEBUG [M:0;jenkins-hbase4:39305] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-19 18:15:13,746 INFO [M:0;jenkins-hbase4:39305] master.HMaster(1491): Stopping master jetty server
2023-07-19 18:15:13,746 INFO [M:0;jenkins-hbase4:39305] server.AbstractConnector(383): Stopped ServerConnector@23e4505e{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-19 18:15:13,747 DEBUG [M:0;jenkins-hbase4:39305] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-19 18:15:13,747 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-19 18:15:13,747 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689790510084] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689790510084,5,FailOnTimeoutGroup]
2023-07-19 18:15:13,747 DEBUG [M:0;jenkins-hbase4:39305] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-19 18:15:13,747 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689790510084] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689790510084,5,FailOnTimeoutGroup]
2023-07-19 18:15:13,747 INFO [M:0;jenkins-hbase4:39305] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-19 18:15:13,747 INFO [M:0;jenkins-hbase4:39305] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-19 18:15:13,747 INFO [M:0;jenkins-hbase4:39305] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-07-19 18:15:13,747 DEBUG [M:0;jenkins-hbase4:39305] master.HMaster(1512): Stopping service threads
2023-07-19 18:15:13,747 INFO [M:0;jenkins-hbase4:39305] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-19 18:15:13,747 ERROR [M:0;jenkins-hbase4:39305] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-07-19 18:15:13,748 INFO [M:0;jenkins-hbase4:39305] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-19 18:15:13,748 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-19 18:15:13,748 DEBUG [M:0;jenkins-hbase4:39305] zookeeper.ZKUtil(398): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-19 18:15:13,748 WARN [M:0;jenkins-hbase4:39305] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-19 18:15:13,748 INFO [M:0;jenkins-hbase4:39305] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-19 18:15:13,748 INFO [M:0;jenkins-hbase4:39305] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-19 18:15:13,748 DEBUG [M:0;jenkins-hbase4:39305] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-19 18:15:13,748 INFO [M:0;jenkins-hbase4:39305] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-19 18:15:13,748 DEBUG [M:0;jenkins-hbase4:39305] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-19 18:15:13,748 DEBUG [M:0;jenkins-hbase4:39305] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-19 18:15:13,748 DEBUG [M:0;jenkins-hbase4:39305] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-19 18:15:13,748 INFO [M:0;jenkins-hbase4:39305] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.19 KB heapSize=90.66 KB
2023-07-19 18:15:13,760 INFO [M:0;jenkins-hbase4:39305] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.19 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/87f9eaf0fdc8433e97aeabb30e1e8222
2023-07-19 18:15:13,766 DEBUG [M:0;jenkins-hbase4:39305] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/87f9eaf0fdc8433e97aeabb30e1e8222 as hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/87f9eaf0fdc8433e97aeabb30e1e8222
2023-07-19 18:15:13,771 INFO [M:0;jenkins-hbase4:39305] regionserver.HStore(1080): Added hdfs://localhost:37897/user/jenkins/test-data/28fce574-05d2-84ea-7589-7c501f08e8f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/87f9eaf0fdc8433e97aeabb30e1e8222, entries=22, sequenceid=175, filesize=11.1 K
2023-07-19 18:15:13,772 INFO [M:0;jenkins-hbase4:39305] regionserver.HRegion(2948): Finished flush of dataSize ~76.19 KB/78022, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=175, compaction requested=false
2023-07-19 18:15:13,774 INFO [M:0;jenkins-hbase4:39305] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-19 18:15:13,774 DEBUG [M:0;jenkins-hbase4:39305] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-19 18:15:13,777 INFO [M:0;jenkins-hbase4:39305] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-19 18:15:13,777 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-19 18:15:13,779 INFO [M:0;jenkins-hbase4:39305] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39305
2023-07-19 18:15:13,780 DEBUG [M:0;jenkins-hbase4:39305] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,39305,1689790509306 already deleted, retry=false
2023-07-19 18:15:13,893 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-19 18:15:13,893 INFO [M:0;jenkins-hbase4:39305] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39305,1689790509306; zookeeper connection closed.
2023-07-19 18:15:13,893 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): master:39305-0x1017ecb73ea0000, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-19 18:15:13,994 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-19 18:15:13,994 INFO [RS:1;jenkins-hbase4:41789] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41789,1689790509619; zookeeper connection closed.
2023-07-19 18:15:13,994 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:41789-0x1017ecb73ea0002, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-19 18:15:13,994 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7b53aad4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7b53aad4
2023-07-19 18:15:14,094 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-19 18:15:14,094 INFO [RS:0;jenkins-hbase4:33689] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33689,1689790509470; zookeeper connection closed.
2023-07-19 18:15:14,094 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:33689-0x1017ecb73ea0001, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-19 18:15:14,094 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1593f376] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1593f376
2023-07-19 18:15:14,194 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-19 18:15:14,194 INFO [RS:3;jenkins-hbase4:38273] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38273,1689790511164; zookeeper connection closed.
2023-07-19 18:15:14,194 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x1017ecb73ea000b, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-19 18:15:14,195 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@272078ee] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@272078ee
2023-07-19 18:15:14,294 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-19 18:15:14,294 INFO [RS:2;jenkins-hbase4:36501] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36501,1689790509773; zookeeper connection closed.
2023-07-19 18:15:14,295 DEBUG [Listener at localhost/41015-EventThread] zookeeper.ZKWatcher(600): regionserver:36501-0x1017ecb73ea0003, quorum=127.0.0.1:50044, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-19 18:15:14,295 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@36279f00] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@36279f00
2023-07-19 18:15:14,295 INFO [Listener at localhost/41015] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-19 18:15:14,295 WARN [Listener at localhost/41015] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-19 18:15:14,300 INFO [Listener at localhost/41015] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-19 18:15:14,404 WARN [BP-632430600-172.31.14.131-1689790508583 heartbeating to localhost/127.0.0.1:37897] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-19 18:15:14,404 WARN [BP-632430600-172.31.14.131-1689790508583 heartbeating to localhost/127.0.0.1:37897] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-632430600-172.31.14.131-1689790508583 (Datanode Uuid 660f14bb-e20b-4d77-99a5-a1a137a0e71d) service to localhost/127.0.0.1:37897
2023-07-19 18:15:14,405 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data5/current/BP-632430600-172.31.14.131-1689790508583] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-19 18:15:14,405 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data6/current/BP-632430600-172.31.14.131-1689790508583] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-19 18:15:14,406 WARN [Listener at localhost/41015] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-19 18:15:14,410 INFO [Listener at localhost/41015] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-19 18:15:14,513 WARN [BP-632430600-172.31.14.131-1689790508583 heartbeating to localhost/127.0.0.1:37897] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-19 18:15:14,514 WARN [BP-632430600-172.31.14.131-1689790508583 heartbeating to localhost/127.0.0.1:37897] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-632430600-172.31.14.131-1689790508583 (Datanode Uuid 3c8e98e5-038f-46d7-b4c6-412a2b6e394b) service to localhost/127.0.0.1:37897
2023-07-19 18:15:14,514 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data3/current/BP-632430600-172.31.14.131-1689790508583] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-19 18:15:14,515 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data4/current/BP-632430600-172.31.14.131-1689790508583] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-19 18:15:14,516 WARN [Listener at localhost/41015] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-19 18:15:14,519 INFO [Listener at localhost/41015] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-19 18:15:14,623 WARN [BP-632430600-172.31.14.131-1689790508583 heartbeating to localhost/127.0.0.1:37897] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-19 18:15:14,623 WARN [BP-632430600-172.31.14.131-1689790508583 heartbeating to localhost/127.0.0.1:37897] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-632430600-172.31.14.131-1689790508583 (Datanode Uuid e8e9f8e6-dfa9-4ff3-a1e1-03fff615490a) service to localhost/127.0.0.1:37897
2023-07-19 18:15:14,624 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data1/current/BP-632430600-172.31.14.131-1689790508583] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-19 18:15:14,624 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/93ab9cb0-14c8-d447-24b2-0576287230f8/cluster_00949c9f-c893-0ee4-4451-def27e610c6f/dfs/data/data2/current/BP-632430600-172.31.14.131-1689790508583] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-19 18:15:14,635 INFO [Listener at localhost/41015] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-19 18:15:14,749 INFO [Listener at localhost/41015] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-19 18:15:14,778 INFO [Listener at localhost/41015] hbase.HBaseTestingUtility(1293): Minicluster is down